Submitted by:
| # | Name | Id | Email |
|---|---|---|---|
| Student 1 | Tomer Danilin | 207933961 | tomerdanilin@campus.technion.ac.il |
| Student 2 | Husain Azaqy | 213382104 | hssainazaqy@campus.technion.ac.il |
Introduction
In this assignment we'll create a from-scratch implementation of two fundamental deep learning concepts: the backpropagation algorithm and stochastic gradient descent-based optimizers. In addition, you will create a general-purpose multilayer perceptron, the core building block of deep neural networks. We'll visualize decision boundaries and ROC curves in the context of binary classification. Following that we will focus on convolutional networks with residual blocks. We'll create our own network architectures and train them using GPUs on the course servers, then we'll conduct architecture experiments to determine the effects of different architectural decisions on the performance of deep networks.
General Guidelines
- Please read the getting started page on the course website. It explains how to set up, run and submit the assignment.
- Please read the course servers usage guide. It explains how to use and run your code on the course servers to benefit from training with GPUs.
- The text and code cells in these notebooks are intended to guide you through the assignment and help you verify your solutions. The notebooks do not need to be edited at all (unless you wish to play around). The only exception is to fill your name(s) in the above cell before submission. Please do not remove sections or change the order of any cells.
- All your code (and even answers to questions) should be written in the files within the Python package corresponding to the assignment number (hw1, hw2, etc.). You can of course use any editor or IDE to work on these files.
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$
Part 1: Backpropagation
In this part, we'll implement backpropagation and automatic differentiation from scratch and compare our implementations to PyTorch's built-in implementation (autograd).
import torch
import unittest
%load_ext autoreload
%autoreload 2
test = unittest.TestCase()
Reminder: The backpropagation algorithm is at the core of training deep models. To state the problem we'll tackle in this notebook, imagine we have an L-layer MLP model, defined as $$ \hat{\vec{y}^i} = \vec{y}_L^i= \varphi_L \left( \mat{W}_L \varphi_{L-1} \left( \cdots \varphi_1 \left( \mat{W}_1 \vec{x}^i + \vec{b}_1 \right) \cdots \right) + \vec{b}_L \right), $$
a pointwise loss function $\ell(\vec{y}, \hat{\vec{y}})$ and an empirical loss over our entire data set, $$ L(\vec{\theta}) = \frac{1}{N} \sum_{i=1}^{N} \ell(\vec{y}^i, \hat{\vec{y}^i}) + R(\vec{\theta}) $$
where $\vec{\theta}$ is a vector containing all network parameters, e.g. $\vec{\theta} = \left[ \mat{W}_{1,:}, \vec{b}_1, \dots, \mat{W}_{L,:}, \vec{b}_L \right]$.
In order to train our model we would like to calculate the derivative (or gradient, in the multivariate case) of the loss with respect to each and every one of the parameters, i.e. $\pderiv{L}{\mat{W}_j}$ and $\pderiv{L}{\vec{b}_j}$ for all $j$. Since the gradient "points" in the direction of steepest increase, the negative gradient is often used as a descent direction for descent-based optimization algorithms. In other words, iteratively updating each parameter proportionally to its negative gradient can lead to convergence to a local minimum of the loss function.
Calculus tells us that as long as we know the derivatives of all the functions "along the way" ($\varphi_i(\cdot),\ \ell(\cdot,\cdot),\ R(\cdot)$) we can use the chain rule to calculate the derivative of the loss with respect to any one of the parameter vectors. Note that if the loss $L(\vec{\theta})$ is scalar (which is usually the case), the gradient of a parameter will have the same shape as the parameter itself (matrix/vector/tensor of same dimensions).
For deep models that are a composition of many functions, calculating the gradient of each parameter by hand and implementing hard-coded gradient derivations quickly becomes infeasible. Additionally, such code makes models hard to change, since any change potentially requires re-derivation and re-implementation of the entire gradient function.
The backpropagation algorithm, which we saw in the lecture, provides us with an effective method of applying the chain rule recursively, so that we can implement gradient calculations for arbitrarily deep or complex models.
We'll now implement backpropagation using a modular approach, which will allow us to chain many component layers together and get automatic gradient calculation of the output with respect to the input or any intermediate parameter.
To do this, we'll define a Layer class. Here's the API of this class:
import hw2.layers as layers
help(layers.Layer)
Help on class Layer in module hw2.layers:
class Layer(abc.ABC)
| A Layer is some computation element in a network architecture which
| supports automatic differentiation using forward and backward functions.
|
| Method resolution order:
| Layer
| abc.ABC
| builtins.object
|
| Methods defined here:
|
| __call__(self, *args, **kwargs)
| Call self as a function.
|
| __init__(self)
| Initialize self. See help(type(self)) for accurate signature.
|
| __repr__(self)
| Return repr(self).
|
| backward(self, dout)
| Computes the backward pass of the layer, i.e. the gradient
| calculation of the final network output with respect to each of the
| parameters of the forward function.
| :param dout: The gradient of the network with respect to the
| output of this layer.
| :return: A tuple with the same number of elements as the parameters of
| the forward function. Each element will be the gradient of the
| network output with respect to that parameter.
|
| forward(self, *args, **kwargs)
| Computes the forward pass of the layer.
| :param args: The computation arguments (implementation specific).
| :return: The result of the computation.
|
| params(self)
| :return: Layer's trainable parameters and their gradients as a list
| of tuples, each tuple containing a tensor and it's corresponding
| gradient tensor.
|
| train(self, training_mode=True)
| Changes the mode of this layer between training and evaluation (test)
| mode. Some layers have different behaviour depending on mode.
| :param training_mode: True: set the model in training mode. False: set
| evaluation mode.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __abstractmethods__ = frozenset({'backward', 'forward', 'params'})
In other words, a Layer can be anything: a layer, an activation function, a loss function or generally any computation that we know how to derive a gradient for.
Each Layer must define a forward() function and a backward() function.
- The forward() function performs the actual calculation/operation of the block and returns an output.
- The backward() function computes the gradient of the input and parameters as a function of the gradient of the output, according to the chain rule.
Here's a diagram illustrating the above explanation:

Note that the diagram doesn't show that if the function is parametrized, i.e. $f(\vec{x},\vec{y})=f(\vec{x},\vec{y};\vec{w})$, there are also gradients to calculate for the parameters $\vec{w}$.
The forward pass is straightforward: just do the computation. To understand the backward pass, imagine that there's some "downstream" loss function $L(\vec{\theta})$ and magically somehow we are told the gradient of that loss with respect to the output $\vec{z}$ of our block, i.e. $\pderiv{L}{\vec{z}}$.
Now, since we know how to calculate the derivative of $f(\vec{x},\vec{y};\vec{w})$, it means we know how to calculate $\pderiv{\vec{z}}{\vec{x}}$, $\pderiv{\vec{z}}{\vec{y}}$ and $\pderiv{\vec{z}}{\vec{w}}$ . Thanks to the chain rule, this is all we need to calculate the gradients of the loss w.r.t. the input and parameters:
$$ \begin{align} \pderiv{L}{\vec{x}} &= \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{x}}\\ \pderiv{L}{\vec{y}} &= \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{y}}\\ \pderiv{L}{\vec{w}} &= \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{w}} \end{align} $$
Comparison with PyTorch
PyTorch has the nn.Module base class, which may seem to be similar to our Layer since it also represents a computation element in a network.
However, PyTorch's nn.Modules don't compute the gradient directly; they only define the forward calculations.
Instead, PyTorch has a more low-level API for defining a function and explicitly implementing its forward() and backward(). See autograd.Function.
When an operation is performed on a tensor, it creates a Function instance which performs the operation and
stores any necessary information for calculating the gradient later on. Additionally, each Function points to the
other Function objects representing the operations performed earlier on the tensor. Thus, a graph (or DAG)
of operations is created (this is not 100% exact, as the graph is actually composed of a different type of class which wraps the backward method, but it's accurate enough for our purposes).
A Tensor instance which was created by performing operations on one or more tensors with requires_grad=True, has a grad_fn property which is a Function instance representing the last operation performed to produce this tensor.
This exposes the graph of Function instances, each with its own backward() function. Therefore, in PyTorch the backward() function is called on the tensors, not the modules.
Our Layers are therefore a combination of the ideas in Module and Function and we'll implement them together,
just to make things simpler.
Our goal here is to create a "poor man's autograd": We'll use PyTorch tensors,
but we'll calculate and store the gradients in our Layers (or return them).
The gradients we'll calculate are of the entire block, not individual operations on tensors.
To test our implementation, we'll use PyTorch's autograd.
Note that of course this method of tracking gradients is much more limited than what PyTorch offers. However it allows us to implement the backpropagation algorithm very simply and really see how it works.
Let's set up some testing instrumentation:
from hw2.grad_compare import compare_layer_to_torch
def test_block_grad(block: layers.Layer, x, y=None, delta=1e-3):
    diffs = compare_layer_to_torch(block, x, y)
    # Assert diff values
    for diff in diffs:
        test.assertLess(diff, delta)
# Show the compare function
compare_layer_to_torch??
Signature: compare_layer_to_torch(layer: hw2.layers.Layer, x, y=None, seed=42)
Source:
def compare_layer_to_torch(layer: layers.Layer, x, y=None, seed=42):
    """
    Compares the manually calculated gradients of a Layer (its backward
    function) to the gradients produced by PyTorch's autograd.
    """
    # Forward pass
    torch.manual_seed(seed)
    z = layer(x, y=y)

    # Invent some output gradient
    dz = torch.randn(*z.shape) if z.dim() > 0 else torch.tensor(1.0)

    # Backward pass (ours)
    dx = layer.backward(dz)

    # Attach autograd gradients to params and input and re-run forward pass on
    # the same input
    for t, _ in layer.params() + [(x, None)]:
        t.requires_grad = True
    torch.manual_seed(seed)
    z = layer(x, y=y)

    # Backward pass (this time with PyTorch autograd)
    z.backward(dz)

    print("Comparing gradients...")
    diffs = []

    # Compare input gradient
    dx_autograd = x.grad
    diffs.append(torch.norm(dx_autograd - dx))
    print(f'{"input":8s} diff={diffs[-1]:.3f}')

    # Compare parameter gradients
    for i, (p, dp) in enumerate(layer.params()):
        dp_autograd = p.grad
        diffs.append(torch.norm(dp_autograd - dp))
        print(f"param#{i+1:02d} diff={diffs[-1]:.3f}")

    return diffs
File: c:\users\kingh\desktop\semester7\deep learning on computation accelerators\dl_on_computational_accelerators_hw\hw2\hw2_2024\hw2\grad_compare.py
Type: function
Notes:
- After you complete your implementation, you should make sure to read and understand the compare_layer_to_torch() function. It will help you understand what PyTorch is doing.
- The value of delta above should not actually be needed: a correct implementation will give you a diff of exactly zero.
Layer Implementations
We'll now implement some Layers that will enable us to later build an MLP model of arbitrary depth, complete with automatic differentiation.
For each block, you'll first implement the forward() function.
Then, you will calculate the derivative of the block by hand with respect to each of its
input tensors and each of its parameter tensors (if any).
Using your manually-calculated derivation, you can then implement the backward() function.
Notice that we have intermediate Jacobians that are potentially high dimensional tensors. For example in the expression $\pderiv{L}{\vec{w}} = \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{w}}$, the term $\pderiv{\vec{z}}{\vec{w}}$ is a 4D Jacobian if both $\vec{z}$ and $\vec{w}$ are 2D matrices.
In order to implement the backpropagation algorithm efficiently, we need to implement every backward function without explicitly constructing this Jacobian. Instead, we're interested in directly calculating the vector-Jacobian product (VJP) $\pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{w}}$. In order to do this, you should try to figure out the gradient of the loss with respect to one element, e.g. $\pderiv{L}{\vec{w}_{1,1}}$ and extrapolate from there how to directly obtain the VJP.
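To make this concrete, here is a small self-contained sketch (illustrative only, not part of the assignment code) contrasting a materialized Jacobian with the direct VJP for the affine map $\mat{Z} = \mat{X}\mattr{W}$; all names here are ours:

import torch
# Toy shapes: batch of N samples, Din input features, Dout output features
N, Din, Dout = 4, 3, 2
x = torch.randn(N, Din)
W = torch.randn(Dout, Din)
dz = torch.randn(N, Dout)  # pretend this is dL/dZ, given from downstream
# Naive: materialize the full 4D Jacobian dZ/dX of shape (N, Dout, N, Din).
# Only the "diagonal" sample blocks are non-zero, since sample i's output
# does not depend on sample j's input.
J = torch.zeros(N, Dout, N, Din)
for i in range(N):
    J[i, :, i, :] = W
dx_naive = torch.einsum('no,nomi->mi', dz, J)
# Direct VJP: the same result, without ever building J
dx_vjp = dz @ W
print(torch.allclose(dx_naive, dx_vjp))  # True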
Activation functions
(Leaky) ReLU
ReLU, or rectified linear unit, is a very common activation function in deep learning architectures. In its most standard form, as we'll implement here, it has no parameters.
We'll first implement the "leaky" version, defined as
$$ \mathrm{relu}(\vec{x}) = \max(\alpha\vec{x},\vec{x}), \ 0\leq\alpha<1 $$
This is similar to the ReLU activation we've seen in class, except that it has a small non-zero slope when its input is negative. Note that it's not differentiable everywhere; however, it has sub-gradients, defined separately for positive-valued and for negative-valued inputs.
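For intuition, here is a minimal sketch of the forward and backward logic (the function names are illustrative, not the required API of hw2/layers.py):

import torch
def lrelu_forward(x, alpha=0.1):
    # max(alpha*x, x): identity for positive inputs, slope alpha otherwise
    return torch.max(alpha * x, x)
def lrelu_backward(dout, x, alpha=0.1):
    # Sub-gradient: 1 where x > 0, alpha elsewhere. The VJP is just an
    # element-wise product, so no Jacobian is materialized.
    return dout * torch.where(x > 0, torch.ones_like(x), torch.full_like(x, alpha))
x = torch.randn(5)
dx = lrelu_backward(torch.ones_like(x), x)  # gradient w.r.t. the input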
TODO: Complete the implementation of the LeakyReLU class in the hw2/layers.py module.
N = 100
in_features = 200
num_classes = 10
eps = 1e-6
# Test LeakyReLU
alpha = 0.1
lrelu = layers.LeakyReLU(alpha=alpha)
x_test = torch.randn(N, in_features)
# Test forward pass
z = lrelu(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.nn.LeakyReLU(alpha)(x_test), atol=eps))
# Test backward pass
test_block_grad(lrelu, x_test)
Comparing gradients...
input diff=0.000
Now using the LeakyReLU, we can trivially define a regular ReLU block as a special case.
TODO: Complete the implementation of the ReLU class in the hw2/layers.py module.
# Test ReLU
relu = layers.ReLU()
x_test = torch.randn(N, in_features)
# Test forward pass
z = relu(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.relu(x_test), atol=eps))
# Test backward pass
test_block_grad(relu, x_test)
Comparing gradients...
input diff=0.000
Sigmoid
The sigmoid function $\sigma(x)$ is also sometimes used as an activation function. We saw it previously in the context of logistic regression.
The sigmoid function is defined as
$$ \sigma(\vec{x}) = \frac{1}{1+\exp(-\vec{x})}. $$
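A useful fact for the backward pass is that $\pderiv{\sigma}{x} = \sigma(x)\left(1-\sigma(x)\right)$, so the gradient can be computed from the cached forward output alone. A hedged sketch (names illustrative, not the required hw2/layers.py API):

import torch
def sigmoid_forward(x):
    return torch.sigmoid(x)  # 1 / (1 + exp(-x)); cache this output for backward
def sigmoid_backward(dout, z):
    # z is the cached forward output; d(sigma)/dx = z * (1 - z) element-wise
    return dout * z * (1 - z)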
# Test Sigmoid
sigmoid = layers.Sigmoid()
x_test = torch.randn(N, in_features, in_features) # 3D input should work
# Test forward pass
z = sigmoid(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.sigmoid(x_test), atol=eps))
# Test backward pass
test_block_grad(sigmoid, x_test)
Comparing gradients...
input diff=0.000
Hyperbolic Tangent
The hyperbolic tangent function $\tanh(x)$ is a common activation function used when the output should be in the range [-1, 1].
The tanh function is defined as
$$ \tanh(\vec{x}) = \frac{\exp(\vec{x})-\exp(-\vec{x})}{\exp(\vec{x})+\exp(-\vec{x})}. $$
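As with the sigmoid, the derivative has a convenient closed form in terms of the output: $\pderiv{\tanh}{x} = 1 - \tanh^2(x)$. A sketch of the corresponding VJP (names illustrative):

import torch
def tanh_backward(dout, z):
    # z is the cached forward output tanh(x); the derivative is 1 - z^2
    return dout * (1 - z ** 2)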
# Test TanH
tanh = layers.TanH()
x_test = torch.randn(N, in_features, in_features) # 3D input should work
# Test forward pass
z = tanh(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.tanh(x_test), atol=eps))
# Test backward pass
test_block_grad(tanh, x_test)
Comparing gradients...
input diff=0.000
Linear (fully connected) layer
First, we'll implement an affine transform layer, also known as a fully connected layer.
Given an input $\mat{X}$ the layer computes,
$$ \mat{Z} = \mat{X} \mattr{W} + \vec{b} ,~ \mat{X}\in\set{R}^{N\times D_{\mathrm{in}}},~ \mat{W}\in\set{R}^{D_{\mathrm{out}}\times D_{\mathrm{in}}},~ \vec{b}\in\set{R}^{D_{\mathrm{out}}}. $$
Notes:
- We write it this way (with a transposed weight matrix) to follow PyTorch's convention, where the weight of a linear layer has shape $D_{\mathrm{out}}\times D_{\mathrm{in}}$.
- $N$ is the number of samples in the input (batch size). The input $\mat{X}$ will always be a tensor containing a batch dimension first.
- Thanks to broadcasting, $\vec{b}$ can remain a vector even though the input $\mat{X}$ is a matrix.
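Working out the VJPs element-wise as suggested earlier, the backward pass of this layer reduces to three matrix products. A hedged sketch (names illustrative, not the required hw2/layers.py API):

import torch
def linear_forward(x, w, b):
    # x: (N, Din), w: (Dout, Din), b: (Dout,)
    return x @ w.T + b
def linear_backward(dout, x, w):
    # dout = dL/dZ of shape (N, Dout)
    dx = dout @ w          # dL/dX: (N, Din)
    dw = dout.T @ x        # dL/dW: (Dout, Din)
    db = dout.sum(dim=0)   # dL/db: (Dout,), summed over the batch
    return dx, dw, db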
TODO: Complete the implementation of the Linear class in the hw2/layers.py module.
# Test Linear
out_features = 1000
fc = layers.Linear(in_features, out_features)
x_test = torch.randn(N, in_features)
# Test forward pass
z = fc(x_test)
test.assertSequenceEqual(z.shape, [N, out_features])
torch_fc = torch.nn.Linear(in_features, out_features, bias=True)
torch_fc.weight = torch.nn.Parameter(fc.w)
torch_fc.bias = torch.nn.Parameter(fc.b)
test.assertTrue(torch.allclose(torch_fc(x_test), z, atol=eps))
# Test backward pass
test_block_grad(fc, x_test)
# Test second backward pass
x_test = torch.randn(N, in_features)
z = fc(x_test)
z = fc(x_test)
test_block_grad(fc, x_test)
Comparing gradients...
input diff=0.000
param#01 diff=0.000
param#02 diff=0.000
Comparing gradients...
input diff=0.000
param#01 diff=0.000
param#02 diff=0.000
Cross-Entropy Loss
As you know by now, cross-entropy is a common loss function for classification tasks. In class, we defined it as
$$\ell_{\mathrm{CE}}(\vec{y},\hat{\vec{y}}) = - {\vectr{y}} \log(\hat{\vec{y}})$$
where $\hat{\vec{y}} = \mathrm{softmax}(\vec{x})$ is a probability vector (the output of softmax on the class scores $\vec{x}$) and the vector $\vec{y}$ is a 1-hot encoded class label.
However, it's tricky to compute the gradient of softmax, so instead we'll define a version of cross-entropy that produces the exact same output but works directly on the class scores $\vec{x}$.
We can write, $$\begin{align} \ell_{\mathrm{CE}}(\vec{y},\hat{\vec{y}}) &= - {\vectr{y}} \log(\hat{\vec{y}}) = - {\vectr{y}} \log\left(\mathrm{softmax}(\vec{x})\right) \\ &= - {\vectr{y}} \log\left(\frac{e^{\vec{x}}}{\sum_k e^{x_k}}\right) \\ &= - \log\left(\frac{e^{x_y}}{\sum_k e^{x_k}}\right) \\ &= - \left(\log\left(e^{x_y}\right) - \log\left(\sum_k e^{x_k}\right)\right)\\ &= - x_y + \log\left(\sum_k e^{x_k}\right) \end{align}$$
where the scalar $y$ is the correct class label, so $x_y$ is the correct class score.
Note that this version of cross entropy is also what's provided by PyTorch's nn module.
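Working from the last expression above, one can show that the gradient of this loss w.r.t. the scores is the well-known $\mathrm{softmax}(\vec{x}) - \vec{y}$ (with $\vec{y}$ one-hot). A hedged sketch, assuming a mean reduction over the batch and using the standard max-subtraction trick for numerical stability (names illustrative):

import torch
def cross_entropy_forward(x, y):
    # x: (N, C) class scores, y: (N,) integer labels
    x = x - x.max(dim=1, keepdim=True).values   # stability; cancels out in the loss
    lse = torch.logsumexp(x, dim=1)             # log sum_k e^{x_k}
    x_y = x[torch.arange(x.shape[0]), y]        # correct-class scores x_y
    return (lse - x_y).mean()
def cross_entropy_backward(x, y):
    # dL/dx = (softmax(x) - one_hot(y)) / N for the mean reduction
    N = x.shape[0]
    grad = torch.softmax(x, dim=1)
    grad[torch.arange(N), y] -= 1
    return grad / N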
TODO: Complete the implementation of the CrossEntropyLoss class in the hw2/layers.py module.
# Test CrossEntropy
cross_entropy = layers.CrossEntropyLoss()
scores = torch.randn(N, num_classes)
labels = torch.randint(low=0, high=num_classes, size=(N,), dtype=torch.long)
# Test forward pass
loss = cross_entropy(scores, labels)
expected_loss = torch.nn.functional.cross_entropy(scores, labels)
test.assertLess(torch.abs(expected_loss-loss).item(), 1e-5)
print('loss=', loss.item())
# Test backward pass
test_block_grad(cross_entropy, scores, y=labels)
loss= 2.7283618450164795
Comparing gradients...
input diff=0.000
Building Models
Now that we have some working Layers, we can build an MLP model of arbitrary depth and compute end-to-end gradients.
First, let's copy an idea from PyTorch and implement our own version of the nn.Sequential module.
This is a Layer which contains other Layers and calls them in sequence. We'll use this to build our MLP model.
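The core idea is just two loops: forward feeds each layer's output into the next, and backward applies the chain rule by calling each layer's backward() in reverse order. A minimal sketch (not the required API):

class SequentialSketch:
    def __init__(self, *layers):
        self.layers = layers
    def forward(self, x):
        for layer in self.layers:
            x = layer(x)  # the output of one layer is the input of the next
        return x
    def backward(self, dout):
        # Chain rule: propagate the output gradient back, layer by layer
        for layer in reversed(self.layers):
            dout = layer.backward(dout)
        return dout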
TODO: Complete the implementation of the Sequential class in the hw2/layers.py module.
# Test Sequential
# Let's create a long sequence of layers and see
# whether we can compute end-to-end gradients of the whole thing.
seq = layers.Sequential(
    layers.Linear(in_features, 100),
    layers.Linear(100, 200),
    layers.Linear(200, 100),
    layers.ReLU(),
    layers.Linear(100, 500),
    layers.LeakyReLU(alpha=0.01),
    layers.Linear(500, 200),
    layers.ReLU(),
    layers.Linear(200, 500),
    layers.LeakyReLU(alpha=0.1),
    layers.Linear(500, 1),
    layers.Sigmoid(),
)
x_test = torch.randn(N, in_features)
# Test forward pass
z = seq(x_test)
test.assertSequenceEqual(z.shape, [N, 1])
# Test backward pass
test_block_grad(seq, x_test)
Comparing gradients...
input diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000
param#09 diff=0.000
param#10 diff=0.000
param#11 diff=0.000
param#12 diff=0.000
param#13 diff=0.000
param#14 diff=0.000
Now, equipped with a Sequential, all we have to do is create an MLP architecture.
We'll define our MLP with the following hyperparameters:
- Number of input features, $D$.
- Number of output classes, $C$.
- Sizes of hidden layers, $h_1,\dots,h_L$.
So the architecture will be:
FC($D$, $h_1$) $\rightarrow$ ReLU $\rightarrow$ FC($h_1$, $h_2$) $\rightarrow$ ReLU $\rightarrow$ $\cdots$ $\rightarrow$ FC($h_{L-1}$, $h_L$) $\rightarrow$ ReLU $\rightarrow$ FC($h_{L}$, $C$)
We'll also create a sequence of the above MLP and a cross-entropy loss, since it's the gradient of the loss that we need in order to train a model.
TODO: Complete the implementation of the MLP class in the hw2/layers.py module. Ignore the dropout parameter for now.
# Create an MLP model
mlp = layers.MLP(in_features, num_classes, hidden_features=[100, 50, 100])
print(mlp)
MLP, Sequential
    [0] Linear(self.in_features=200, self.out_features=100)
    [1] ReLU
    [2] Linear(self.in_features=100, self.out_features=50)
    [3] ReLU
    [4] Linear(self.in_features=50, self.out_features=100)
    [5] ReLU
    [6] Linear(self.in_features=100, self.out_features=10)
# Test MLP architecture
N = 100
in_features = 10
num_classes = 10
for activation in ('relu', 'sigmoid'):
    mlp = layers.MLP(in_features, num_classes, hidden_features=[100, 50, 100], activation=activation)
    test.assertEqual(len(mlp.sequence), 7)
    num_linear = 0
    for b1, b2 in zip(mlp.sequence, mlp.sequence[1:]):
        if (str(b2).lower() == activation):
            test.assertTrue(str(b1).startswith('Linear'))
            num_linear += 1
    test.assertTrue(str(mlp.sequence[-1]).startswith('Linear'))
    test.assertEqual(num_linear, 3)

    # Test MLP gradients
    # Test forward pass
    x_test = torch.randn(N, in_features)
    labels = torch.randint(low=0, high=num_classes, size=(N,), dtype=torch.long)
    z = mlp(x_test)
    test.assertSequenceEqual(z.shape, [N, num_classes])

    # Create a sequence of MLP and CE loss
    seq_mlp = layers.Sequential(mlp, layers.CrossEntropyLoss())
    loss = seq_mlp(x_test, y=labels)
    test.assertEqual(loss.dim(), 0)
    print(f'MLP loss={loss}, activation={activation}')

    # Test backward pass
    test_block_grad(seq_mlp, x_test, y=labels)
MLP loss=2.309244155883789, activation=relu
Comparing gradients...
input diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000
MLP loss=2.3934404850006104, activation=sigmoid
Comparing gradients...
input diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000
If the above tests passed then congratulations - you've now implemented an arbitrarily deep model and loss function with end-to-end automatic differentiation!
Questions
TODO: Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.
from cs236781.answers import display_answer
import hw2.answers
Question 1
Suppose we have a linear (i.e. fully-connected) layer with a weight tensor $\mat{W}$, defined with in_features=1024 and out_features=512. We apply this layer to an input tensor $\mat{X}$ containing a batch of N=64 samples. The output of the layer is denoted as $\mat{Y}$.
Consider the Jacobian tensor $\pderiv{\mat{Y}}{\mat{X}}$ of the output of the layer w.r.t. the input $\mat{X}$.
A. What is the shape of this tensor?
B. Is this Jacobian sparse (most elements zero by definition)? If so, why and which elements?
C. Given the gradient of the output w.r.t. some downstream scalar loss $L$, $\delta\mat{Y}=\pderiv{L}{\mat{Y}}$, do we need to materialize the above Jacobian in order to calculate the downstream gradient w.r.t. the input ($\delta\mat{X}$)? If yes, explain why; if no, show how to calculate it without materializing the Jacobian.
Consider the Jacobian tensor $\pderiv{\mat{Y}}{\mat{W}}$ of the output of the layer w.r.t. the layer weights $\mat{W}$. Answer questions A-C about it as well.
display_answer(hw2.answers.part1_q1)
Q1:
A. The shape comes out to be $64\times512\times64\times1024$.
B. Yes, the Jacobian is sparse, because it encodes how each element of the input $\mathbf{X}$ affects each element of the output $\mathbf{Y}$, and each output depends only on its own sample. For any sample $i$ with $1 \le i \le 64$, the block describing how a different sample $X_j$ (with $j \neq i$) affects the output $Y_i$ is identically zero. Thus, the Jacobian is effectively block-diagonal with 64 blocks, and therefore sparse.
C. No, we do not need to materialize the Jacobian to calculate the downstream gradient with respect to the input, $\delta\mathbf{X} = \frac{\partial L}{\partial \mathbf{X}}$. The gradient can be computed using the chain rule: specifically, $\delta\mathbf{X}$ can be obtained as $\delta\mathbf{X} = \delta\mathbf{Y} \mathbf{W}$, where $\delta\mathbf{Y} = \frac{\partial L}{\partial \mathbf{Y}}$ is the gradient of the loss with respect to the output, and $\mathbf{W}$ is the weight matrix. This avoids explicitly constructing the large Jacobian.
Q2:
A. The shape comes out to be $64\times512\times512\times1024$.
B. No, the Jacobian is not guaranteed to be sparse, since the elements of the output Y depend directly on the elements of W; as W is an arbitrary matrix, no sparsity is guaranteed.
C. No, we do not need to materialize the Jacobian to compute $\delta\mathbf{W} = \frac{\partial L}{\partial \mathbf{W}}$. We can compute $\delta\mathbf{W}$ directly using the following:
$$ \delta\mathbf{W} = \delta\mathbf{Y}^\top \mathbf{X}, $$
again avoiding explicit construction of the Jacobian.
Question 2
Is back-propagation required in order to train neural networks with descent-based optimization? Why or why not?
display_answer(hw2.answers.part1_q2)
No, back-propagation is not strictly required to train neural networks using gradient-based optimization, but it makes training far more feasible.
Back-propagation is effectively essential for gradient-based optimization: the fundamental idea behind gradient-based training is to adjust the weights of the network in a way that minimizes a loss function. To do this effectively, we need the gradients of the loss with respect to each of the weights in the network. Computing them naively (e.g. numerically, one parameter at a time) would be far too slow for most medium-to-large networks or architectures.
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$
Part 2: Optimization and Training
In this part we will learn how to implement optimization algorithms for deep networks. Additionally, we'll learn how to write training loops and implement a modular model trainer. We'll use our optimizers and training code to test a few configurations for classifying images with an MLP model.
import os
import numpy as np
import matplotlib.pyplot as plt
import unittest
import torch
import torchvision
import torchvision.transforms as tvtf
%matplotlib inline
%load_ext autoreload
%autoreload 2
seed = 42
plt.rcParams.update({'font.size': 12})
test = unittest.TestCase()
Implementing Optimization Algorithms
In the context of deep learning, an optimization algorithm is some method of iteratively updating model parameters so that the loss converges toward some local minimum (which we hope will be good enough).
Gradient descent-based methods are by far the most popular algorithms for optimization of neural network parameters. However the high-dimensional loss-surfaces we encounter in deep learning applications are highly non-convex. They may be riddled with local minima, saddle points, large plateaus and a host of very challenging "terrain" for gradient-based optimization. This gave rise to many different methods of performing the parameter updates based on the loss gradients, aiming to tackle these optimization challenges.
The most basic gradient-based update rule can be written as,
$$ \vec{\theta} \leftarrow \vec{\theta} - \eta \nabla_{\vec{\theta}} L(\vec{\theta}; \mathcal{D}) $$
where $\mathcal{D} = \left\{ (\vec{x}^i, \vec{y}^i) \right\}_{i=1}^{M}$ is our training dataset or part of it. Specifically, if we have in total $N$ training samples, then
- If $M=N$ this is known as regular gradient descent. If the dataset does not fit in memory, computing the gradient of this loss becomes infeasible.
- If $M=1$, the loss is computed w.r.t. a single different sample each time. This is known as stochastic gradient descent.
- If $1<M<N$ this is known as stochastic mini-batch gradient descent. This is the most commonly-used option.
The intuition behind gradient descent is simple: since the gradient of a multivariate function points in the direction of steepest ascent ("uphill"), we move in the opposite direction. A small step size $\eta$, known as the learning rate, is required since the gradient only serves as a first-order linear approximation of the function's behaviour at $\vec{\theta}$ (recall e.g. the Taylor expansion). In truth, our loss surface generally has nontrivial curvature caused by a high-order nonlinear dependence on $\vec{\theta}$, so taking a large step in the direction of the gradient is just as likely to increase the function value.

The idea behind the stochastic versions is that by constantly changing the samples we compute the loss with, we get a dynamic error surface, i.e. it's different for each set of training samples. This is thought to generally improve the optimization since it may help the optimizer get out of flat regions or sharp local minima since these features may disappear in the loss surface of subsequent batches. The image below illustrates this. The different lines are different 1-dimensional losses for different training set-samples.

Deep learning frameworks generally provide implementations of various gradient-based optimization algorithms.
Here we'll implement our own optimization module from scratch, this time keeping a similar API to the PyTorch optim package.
We define a base Optimizer class. An optimizer holds a set of parameter tensors (these are the trainable parameters of some model) and maintains internal state. It may be used as follows:
- After the forward pass has been performed, the optimizer's zero_grad() function is invoked to clear the parameter gradients computed by previous iterations.
- After the backward pass has been performed, and gradients have been calculated for these parameters, the optimizer's step() function is invoked in order to update the value of each parameter based on its gradient.
The exact method of update is implementation-specific for each optimizer and may depend on its internal state. In addition, adding the regularization penalty to the gradient is handled by the optimizer since it only depends on the parameter values (and not the data).
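Putting the two calls in context, a single training step with this API would look roughly as follows. This is a hypothetical sketch: model, loss_fn, optimizer and a batch (x, y) are assumed to already exist, the seed gradient of 1.0 starts the backward pass of the scalar loss, and the exact call sequence is what you will implement in the trainer later on.

optimizer.zero_grad()                        # clear last step's gradients
loss = loss_fn(model(x), y=y)                # forward pass: model, then loss
dout = loss_fn.backward(torch.tensor(1.0))   # backward through the loss...
model.backward(dout)                         # ...then back through the model
optimizer.step()                             # update parameters from gradients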
Here's the API of our Optimizer:
import hw2.optimizers as optimizers
help(optimizers.Optimizer)
Help on class Optimizer in module hw2.optimizers:
class Optimizer(abc.ABC)
| Optimizer(params)
|
| Base class for optimizers.
|
| Method resolution order:
| Optimizer
| abc.ABC
| builtins.object
|
| Methods defined here:
|
| __init__(self, params)
| :param params: A sequence of model parameters to optimize. Can be a
| list of (param,grad) tuples as returned by the Layers, or a list of
| pytorch tensors in which case the grad will be taken from them.
|
| step(self)
| Updates all the registered parameter values based on their gradients.
|
| zero_grad(self)
| Sets the gradient of the optimized parameters to zero (in place).
|
| ----------------------------------------------------------------------
| Readonly properties defined here:
|
| params
| :return: A sequence of parameter tuples, each tuple containing
| (param_data, param_grad). The data should be updated in-place
| according to the grad.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __abstractmethods__ = frozenset({'step'})
Vanilla SGD with Regularization
Let's start by implementing the simplest gradient-based optimizer. The update rule will be exactly as stated above, but we'll also add an L2-regularization term to the gradient. Remember that in the loss function, the L2 regularization term is expressed by
$$R(\vec{\theta}) = \frac{1}{2}\lambda||\vec{\theta}||^2_2.$$
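Since $\nabla_{\vec{\theta}} R(\vec{\theta}) = \lambda\vec{\theta}$, the whole update is two lines per parameter. A hedged sketch of the step (variable names illustrative):

# For each (param, grad) tuple held by the optimizer:
for p, dp in self.params:
    dp = dp + self.reg * p       # fold in the regularization gradient
    p -= self.learn_rate * dp    # in-place vanilla SGD update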
TODO: Complete the implementation of the VanillaSGD class in the hw2/optimizers.py module.
# Test VanillaSGD
torch.manual_seed(42)
p = torch.randn(500, 10)
dp = torch.randn(*p.shape)*2
params = [(p, dp)]
vsgd = optimizers.VanillaSGD(params, learn_rate=0.5, reg=0.1)
vsgd.step()
expected_p = torch.load('tests/assets/expected_vsgd.pt')
diff = torch.norm(p-expected_p).item()
print(f'diff={diff}')
test.assertLess(diff, 1e-3)
diff=0.0
Training
Now that we can build a model and loss function, compute their gradients and we have an optimizer, we can finally do some training!
In the spirit of more modular software design, we'll implement a class that will aid us in automating the repetitive training loop code that we usually write over and over again. This will be useful for both training our Layer-based models and also later for training PyTorch nn.Modules.
Here's our Trainer API:
import hw2.training as training
help(training.Trainer)
Help on class Trainer in module hw2.training:
class Trainer(abc.ABC)
| Trainer(model: torch.nn.modules.module.Module, device: Union[torch.device, NoneType] = None)
|
| A class abstracting the various tasks of training models.
|
| Provides methods at multiple levels of granularity:
| - Multiple epochs (fit)
| - Single epoch (train_epoch/test_epoch)
| - Single batch (train_batch/test_batch)
|
| Method resolution order:
| Trainer
| abc.ABC
| builtins.object
|
| Methods defined here:
|
| __init__(self, model: torch.nn.modules.module.Module, device: Union[torch.device, NoneType] = None)
| Initialize the trainer.
| :param model: Instance of the model to train.
| :param device: torch.device to run training on (CPU or GPU).
|
| fit(self, dl_train: torch.utils.data.dataloader.DataLoader, dl_test: torch.utils.data.dataloader.DataLoader, num_epochs: int, checkpoints: str = None, early_stopping: int = None, print_every: int = 1, **kw) -> cs236781.train_results.FitResult
| Trains the model for multiple epochs with a given training set,
| and calculates validation loss over a given validation set.
| :param dl_train: Dataloader for the training set.
| :param dl_test: Dataloader for the test set.
| :param num_epochs: Number of epochs to train for.
| :param checkpoints: Whether to save model to file every time the
| test set accuracy improves. Should be a string containing a
| filename without extension.
| :param early_stopping: Whether to stop training early if there is no
| test loss improvement for this number of epochs.
| :param print_every: Print progress every this number of epochs.
| :return: A FitResult object containing train and test losses per epoch.
|
| save_checkpoint(self, checkpoint_filename: str)
| Saves the model in it's current state to a file with the given name (treated
| as a relative path).
| :param checkpoint_filename: File name or relative path to save to.
|
| test_batch(self, batch) -> cs236781.train_results.BatchResult
| Runs a single batch forward through the model and calculates loss.
| :param batch: A single batch of data from a data loader (might
| be a tuple of data and labels or anything else depending on
| the underlying dataset.
| :return: A BatchResult containing the value of the loss function and
| the number of correctly classified samples in the batch.
|
| test_epoch(self, dl_test: torch.utils.data.dataloader.DataLoader, **kw) -> cs236781.train_results.EpochResult
| Evaluate model once over a test set (single epoch).
| :param dl_test: DataLoader for the test set.
| :param kw: Keyword args supported by _foreach_batch.
| :return: An EpochResult for the epoch.
|
| train_batch(self, batch) -> cs236781.train_results.BatchResult
| Runs a single batch forward through the model, calculates loss,
| preforms back-propagation and updates weights.
| :param batch: A single batch of data from a data loader (might
| be a tuple of data and labels or anything else depending on
| the underlying dataset.
| :return: A BatchResult containing the value of the loss function and
| the number of correctly classified samples in the batch.
|
| train_epoch(self, dl_train: torch.utils.data.dataloader.DataLoader, **kw) -> cs236781.train_results.EpochResult
| Train once over a training set (single epoch).
| :param dl_train: DataLoader for the training set.
| :param kw: Keyword args supported by _foreach_batch.
| :return: An EpochResult for the epoch.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __abstractmethods__ = frozenset({'test_batch', 'train_batch'})
The Trainer class splits the task of training (and evaluating) models into three conceptual levels,
- Multiple epochs - the fit method, which returns a FitResult containing losses and accuracies for all epochs.
- Single epoch - the train_epoch and test_epoch methods, which return an EpochResult containing losses per batch and the single accuracy result of the epoch.
- Single batch - the train_batch and test_batch methods, which return a BatchResult containing a single loss and the number of correctly classified samples in the batch.
It implements the first two levels. Inheriting classes are expected to implement the single-batch level methods since these are model and/or task specific.
The first thing we should do in order to verify our model, gradient calculations and optimizer implementation is to try to overfit a large model (many parameters) to a small dataset (few images). This will show us that things are working properly.
Let's begin by loading the CIFAR-10 dataset.
data_dir = os.path.expanduser('~/.pytorch-datasets')
ds_train = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=True, transform=tvtf.ToTensor())
ds_test = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=False, transform=tvtf.ToTensor())
print(f'Train: {len(ds_train)} samples')
print(f'Test: {len(ds_test)} samples')
Files already downloaded and verified
Files already downloaded and verified
Train: 50000 samples
Test: 10000 samples
Now, let's implement just a small part of our training logic since that's what we need right now.
TODO:
- Complete the implementation of the train_batch() method in the LayerTrainer class within the hw2/training.py module.
- Update the hyperparameter values in the part2_overfit_hp() function in the hw2/answers.py module. Tweak the hyperparameter values until your model overfits a small number of samples in the code block below. You should get 100% accuracy within a few epochs.
The following code block will use your custom Layer-based MLP implementation, custom Vanilla SGD and custom trainer to overfit the data. The classification accuracy should be 100% within a few epochs.
import hw2.layers as layers
import hw2.answers as answers
from torch.utils.data import DataLoader
# Overfit to a very small dataset of 20 samples
batch_size = 10
max_batches = 2
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)
# Get hyperparameters
hp = answers.part2_overfit_hp()
torch.manual_seed(seed)
# Build a model and loss using our custom MLP and CE implementations
model = layers.MLP(3*32*32, num_classes=10, hidden_features=[128]*3, wstd=hp['wstd'])
loss_fn = layers.CrossEntropyLoss()
# Use our custom optimizer
optimizer = optimizers.VanillaSGD(model.params(), learn_rate=hp['lr'], reg=hp['reg'])
# Run training over small dataset multiple times
trainer = training.LayerTrainer(model, loss_fn, optimizer)
best_acc = 0
for i in range(20):
    res = trainer.train_epoch(dl_train, max_batches=max_batches)
    best_acc = res.accuracy if res.accuracy > best_acc else best_acc
test.assertGreaterEqual(best_acc, 98)
Now that we know training works, let's try to fit a model to a bit more data for a few epochs, to see how well we're doing. First, we need a function to plot the FitResults object.
from cs236781.plot import plot_fit
plot_fit?
Signature: plot_fit(
    fit_res: cs236781.train_results.FitResult,
    fig=None,
    log_loss=False,
    legend=None,
    train_test_overlay: bool = False,
)
Docstring:
Plots a FitResult object.
Creates four plots: train loss, test loss, train acc, test acc.
:param fit_res: The fit result to plot.
:param fig: A figure previously returned from this function. If not None, plots will be added to this figure.
:param log_loss: Whether to plot the losses in log scale.
:param legend: What to call this FitResult in the legend.
:param train_test_overlay: Whether to overlay train/test plots on the same axis.
:return: The figure.
File: c:\users\kingh\desktop\semester7\deep learning on computation accelerators\dl_on_computational_accelerators_hw\hw2\hw2_2024\cs236781\plot.py
Type: function
TODO:
- Complete the implementation of the test_batch() method in the LayerTrainer class within the hw2/training.py module.
- Implement the fit() method of the Trainer class within the hw2/training.py module.
- Tweak the hyperparameters for this section in the part2_optim_hp() function in the hw2/answers.py module.
- Run the following code blocks to train. Try to get above 35-40% test-set accuracy.
# Define a larger part of the CIFAR-10 dataset (still not the whole thing)
batch_size = 50
max_batches = 100
in_features = 3*32*32
num_classes = 10
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)
dl_test = torch.utils.data.DataLoader(ds_test, batch_size//2, shuffle=False)
# Define a function to train a model with our Trainer and various optimizers
def train_with_optimizer(opt_name, opt_class, fig):
    torch.manual_seed(seed)

    # Get hyperparameters
    hp = answers.part2_optim_hp()
    hidden_features = [128] * 5
    num_epochs = 10

    # Create model, loss and optimizer instances
    model = layers.MLP(in_features, num_classes, hidden_features, wstd=hp['wstd'])
    loss_fn = layers.CrossEntropyLoss()
    optimizer = opt_class(model.params(), learn_rate=hp[f'lr_{opt_name}'], reg=hp['reg'])

    # Train with the Trainer
    trainer = training.LayerTrainer(model, loss_fn, optimizer)
    fit_res = trainer.fit(dl_train, dl_test, num_epochs, max_batches=max_batches)

    fig, axes = plot_fit(fit_res, fig=fig, legend=opt_name)
    return fig
fig_optim = None
fig_optim = train_with_optimizer('vanilla', optimizers.VanillaSGD, fig_optim)
--- EPOCH 1/10 ---
--- EPOCH 2/10 ---
--- EPOCH 3/10 ---
--- EPOCH 4/10 ---
--- EPOCH 5/10 ---
--- EPOCH 6/10 ---
--- EPOCH 7/10 ---
--- EPOCH 8/10 ---
--- EPOCH 9/10 ---
--- EPOCH 10/10 ---
Momentum
The simple vanilla SGD update is rarely used in practice since it's very slow to converge relative to other optimization algorithms.
One reason is that naïvely updating in the direction of the current gradient causes the updates to fluctuate wildly in areas where the loss surface in some dimensions is much steeper than in others. Another reason is that using the same learning rate for all parameters is not a great idea since not all parameters are created equal. For example, parameters associated with rare features should be updated with a larger step than ones associated with commonly-occurring features, because they'll get fewer updates through the gradients.
Therefore more advanced optimizers take into account the previous gradients of a parameter and/or try to use a per-parameter specific learning rate instead of a common one.
Let's now implement a simple and common optimizer: SGD with Momentum. This optimizer takes previous gradients of a parameter into account when updating its value, instead of just the current one. In practice it usually provides faster convergence than the vanilla SGD.
The SGD with Momentum update rule can be stated as follows: $$\begin{align} \vec{v}_{t+1} &= \mu \vec{v}_t - \eta \delta \vec{\theta}_t \\ \vec{\theta}_{t+1} &= \vec{\theta}_t + \vec{v}_{t+1} \end{align}$$
Where $\eta$ is the learning rate, $\vec{\theta}$ is a model parameter, $\delta \vec{\theta}_t=\pderiv{L}{\vec{\theta}}(\vec{\theta}_t)$ is the gradient of the loss w.r.t. to the parameter and $0\leq\mu<1$ is a hyperparameter known as momentum.
Expanding the update rule recursively shows us how the parameter update in fact depends on all previous gradient values for that parameter, where older gradients are exponentially decayed by a factor of $\mu$ at each timestep.
Since we're incorporating previous gradients (update directions), a noisy value of the current gradient will have less effect, so that the general direction of previous updates is somewhat maintained. The following figure illustrates this.

TODO:
- Complete the implementation of the MomentumSGD class in the hw2/optimizers.py module.
- Tweak the learning rate for momentum in the part2_optim_hp() function in the hw2/answers.py module.
- Run the following code block to compare to the vanilla SGD.
fig_optim = train_with_optimizer('momentum', optimizers.MomentumSGD, fig_optim)
fig_optim
--- EPOCH 1/10 ---
--- EPOCH 2/10 ---
--- EPOCH 3/10 ---
--- EPOCH 4/10 ---
--- EPOCH 5/10 ---
--- EPOCH 6/10 ---
--- EPOCH 7/10 ---
--- EPOCH 8/10 ---
--- EPOCH 9/10 ---
--- EPOCH 10/10 ---
Bonus: RMSProp
This is another optimizer that accounts for previous gradients, but this time it uses them to adapt the learning rate per parameter.
RMSProp maintains a decaying moving average of previous squared gradients, $$ \vec{r}_{t+1} = \gamma\vec{r}_{t} + (1-\gamma)\delta\vec{\theta}_t^2 $$ where $0<\gamma<1$ is a decay constant usually set close to $1$, and $\delta\vec{\theta}_t^2$ denotes element-wise squaring.
The update rule for each parameter is then, $$ \vec{\theta}_{t+1} = \vec{\theta}_t - \left( \frac{\eta}{\sqrt{\vec{r}_{t+1}+\varepsilon}} \right) \delta\vec{\theta}_t $$
where $\varepsilon$ is a small constant to prevent numerical instability. The idea here is to decrease the learning rate for parameters with high gradient values and vice-versa. The decaying moving average prevents accumulating all the past gradients which would cause the effective learning rate to become zero.
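A hedged sketch of the per-parameter step (self.r is a hypothetical name for the optimizer's moving-average state, initialized to zeros; decay, learn_rate and eps correspond to $\gamma$, $\eta$ and $\varepsilon$ above):

for i, (p, dp) in enumerate(self.params):
    # Decaying moving average of element-wise squared gradients
    self.r[i] = decay * self.r[i] + (1 - decay) * dp ** 2
    # Per-parameter effective learning rate
    p -= (learn_rate / torch.sqrt(self.r[i] + eps)) * dp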
Bonus:
- Complete the implementation of the RMSProp class in the hw2/optimizers.py module.
- Tweak the learning rate for RMSProp in the part2_optim_hp() function in the hw2/answers.py module.
- Run the following code block to compare to the other optimizers.
fig_optim = train_with_optimizer('rmsprop', optimizers.RMSProp, fig_optim)
fig_optim
--- EPOCH 1/10 ---
--- EPOCH 2/10 ---
--- EPOCH 3/10 ---
--- EPOCH 4/10 ---
--- EPOCH 5/10 ---
--- EPOCH 6/10 ---
--- EPOCH 7/10 ---
--- EPOCH 8/10 ---
--- EPOCH 9/10 ---
--- EPOCH 10/10 ---
Note that you should get better train/test accuracy with Momentum and RMSProp than with vanilla SGD.
Dropout Regularization
Dropout is a useful technique to improve generalization of deep models.
The idea is simple: during the forward pass, drop (i.e. set to zero) the activation of each neuron with probability $p$. For example, if $p=0.4$ this means we drop the activations of 40% of the neurons (on average).
There are a few important things to note about dropout:
- It is only performed during training. When testing our model the dropout layers should be a no-op.
- In the backward pass, gradients are only propagated back into neurons that weren't dropped during the forward pass.
- During testing, the activations must be scaled, since the expected value of each neuron during the training phase is now $1-p$ times its original expectation. Thus, we need to scale the test-time activations by $1-p$ to match. Equivalently, we can scale the train-time activations by $1/(1-p)$, as in the sketch below.
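Here is a minimal sketch of the "inverted dropout" variant from the last point, where the scaling happens at train time (names illustrative, not the required hw2/layers.py API):

import torch
def dropout_forward(x, p=0.5, training=True):
    if not training:
        return x, None  # no-op at test time
    # Keep each activation with probability 1-p; scale survivors by 1/(1-p)
    mask = (torch.rand_like(x) > p).float() / (1 - p)
    return x * mask, mask
def dropout_backward(dout, mask):
    # Gradients flow only through neurons that survived the forward pass
    return dout * mask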
TODO:
- Complete the implementation of the Dropout class in the hw2/layers.py module.
- Finish the implementation of the MLP's __init__() method in the hw2/layers.py module. If dropout > 0 you should add a Dropout layer after each ReLU.
from hw2.grad_compare import compare_layer_to_torch
# Check architecture of MLP with dropout layers
mlp_dropout = layers.MLP(in_features, num_classes, [50]*3, dropout=0.6)
print(mlp_dropout)
test.assertEqual(len(mlp_dropout.sequence), 10)
for b1, b2 in zip(mlp_dropout.sequence, mlp_dropout.sequence[1:]):
    if str(b1).lower() == 'relu':
        test.assertTrue(str(b2).startswith('Dropout'))
test.assertTrue(str(mlp_dropout.sequence[-1]).startswith('Linear'))
MLP, Sequential
    [0] Linear(self.in_features=3072, self.out_features=50)
    [1] ReLU
    [2] Dropout(p=0.6)
    [3] Linear(self.in_features=50, self.out_features=50)
    [4] ReLU
    [5] Dropout(p=0.6)
    [6] Linear(self.in_features=50, self.out_features=50)
    [7] ReLU
    [8] Dropout(p=0.6)
    [9] Linear(self.in_features=50, self.out_features=10)
# Test end-to-end gradient in train and test modes.
print('Dropout, train mode')
mlp_dropout.train(True)
for diff in compare_layer_to_torch(mlp_dropout, torch.randn(500, in_features)):
    test.assertLess(diff, 1e-3)
print('Dropout, test mode')
mlp_dropout.train(False)
for diff in compare_layer_to_torch(mlp_dropout, torch.randn(500, in_features)):
    test.assertLess(diff, 1e-3)
Dropout, train mode
Comparing gradients...
input diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000
Dropout, test mode
Comparing gradients...
input diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000
To see whether dropout really improves generalization, let's take a small training set (small enough to overfit) and a large test set and check whether we get less overfitting and perhaps improved test-set accuracy when using dropout.
# Define a small set from CIFAR-10, but take a larger test set since we want to test generalization
batch_size = 10
max_batches = 40
in_features = 3*32*32
num_classes = 10
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)
dl_test = torch.utils.data.DataLoader(ds_test, batch_size*2, shuffle=False)
TODO:
Tweak the hyperparameters for this section in the part2_dropout_hp() function in the hw2/answers.py module. Try to set them so that the first model (with dropout=0) overfits. You can disable the other dropout options until you tune the hyperparameters. We can then see the effect of dropout for generalization.
import hw2.answers as answers
# Get hyperparameters
hp = answers.part2_dropout_hp()
hidden_features = [400] * 1
num_epochs = 30
hp
{'wstd': 0.001, 'lr': 0.0015}
torch.manual_seed(seed)
fig=None
#for dropout in [0]: # Use this for tuning the hyperparms until you overfit
for dropout in [0, 0.4, 0.8]:
    model = layers.MLP(in_features, num_classes, hidden_features, wstd=hp['wstd'], dropout=dropout)
    loss_fn = layers.CrossEntropyLoss()
    optimizer = optimizers.MomentumSGD(model.params(), learn_rate=hp['lr'], reg=0)

    print('*** Training with dropout=', dropout)
    trainer = training.LayerTrainer(model, loss_fn, optimizer)
    fit_res_dropout = trainer.fit(dl_train, dl_test, num_epochs, max_batches=max_batches, print_every=6)

    fig, axes = plot_fit(fit_res_dropout, fig=fig, legend=f'dropout={dropout}', log_loss=True)
*** Training with dropout= 0
--- EPOCH 1/30 --- ... --- EPOCH 30/30 ---
*** Training with dropout= 0.4
--- EPOCH 1/30 --- ... --- EPOCH 30/30 ---
*** Training with dropout= 0.8
--- EPOCH 1/30 --- ... --- EPOCH 30/30 ---
(per-batch train/test progress-bar output omitted)
Questions¶
TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.
from cs236781.answers import display_answer
import hw2.answers
Question 1¶
Regarding the graphs you got for the three dropout configurations:
Explain the graphs of no-dropout vs dropout. Do they match what you expected to see?
- If yes, explain why and provide examples based on the graphs.
- If no, explain what you think the problem is and what should be modified to fix it.
Compare the low-dropout setting to the high-dropout setting and explain based on your graphs.
display_answer(hw2.answers.part2_q1)
Q1
A. Comparison of Graphs with Dropout and No-Dropout:
Regarding the graphs with dropout and no-dropout, the results align with our expectations. As anticipated, the highest training accuracy is observed for the no-dropout variant due to its strong overfitting to the training data. This also explains the lower training loss in the no-dropout variant. However, when examining the test accuracy, we observe that the no-dropout variant achieves a lower accuracy compared to the 0.4 dropout variant. This indicates that the 0.4 dropout variant is less overfitted to the training data and exhibits better generalization than the no-dropout variant.
B. Comparison of Graphs with 0.4 Dropout and 0.8 Dropout:
For the graphs comparing 0.4 dropout and 0.8 dropout, we observe that the training loss is lower with 0.4 dropout. This is expected, as fewer neurons and connections in the 0.8 dropout configuration make it harder to effectively minimize the training loss. Training accuracy is also higher with 0.4 dropout, as more active neurons allow for a better fit to the training data. While the test losses are fairly similar, the test accuracy for the 0.4 dropout variant is substantially better. This suggests that the 0.8 dropout variant is underfitted to the data and, as a result, has poor generalization. In contrast, the 0.4 dropout variant demonstrates good generalization and superior performance.
Question 2¶
When training a model with the cross-entropy loss function, is it possible for the test loss to increase for a few epochs while the test accuracy also increases?
If it's possible explain how, if it's not explain why not.
display_answer(hw2.answers.part2_q2)
Q2
Yes, with the cross-entropy loss it is possible for the test loss to increase while the test accuracy improves during training. Cross-entropy measures how confident the predictions are in the correct class and penalizes low-confidence predictions, whereas accuracy only counts correct predictions without considering confidence. For example, the model can correctly classify more samples (increasing accuracy) while becoming very confidently wrong on a few samples it still misclassifies, which increases the average loss. Noisy or complex data and dropout can cause such behavior. It is often temporary and goes away as we keep training the model. A toy numeric illustration is shown below.
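The following small sketch demonstrates the effect with hypothetical logits (the numbers are invented for illustration and are not taken from our training runs): between "epoch A" and "epoch B", one sample flips to correct (accuracy up) while an already-wrong sample becomes confidently wrong (loss up).

import torch
import torch.nn.functional as F

labels = torch.tensor([1, 1, 1])
logits_a = torch.tensor([[0.0, 2.0], [0.0, -0.2], [0.0, -0.8]])  # 1/3 correct
logits_b = torch.tensor([[0.0, 2.0], [0.0,  0.2], [0.0, -4.0]])  # 2/3 correct
for name, logits in [('epoch A', logits_a), ('epoch B', logits_b)]:
    acc = (logits.argmax(dim=1) == labels).float().mean().item()
    loss = F.cross_entropy(logits, labels).item()
    print(f'{name}: accuracy={acc:.2f}, CE loss={loss:.3f}')
# epoch A: accuracy=0.33, CE loss=0.699
# epoch B: accuracy=0.67, CE loss=1.581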
Question 3¶
Explain the difference between gradient descent and back-propagation.
Compare in detail between gradient descent (GD) and stochastic gradient descent (SGD).
Why is SGD used more often in the practice of deep learning? Provide a few justifications.
You would like to try GD to train your model instead of SGD, but you're concerned that your dataset won't fit in memory. A friend suggested that you should split the data into disjoint batches, do multiple forward passes until all data is exhausted, and then do one backward pass on the sum of the losses.
- Would this approach produce a gradient equivalent to GD? Why or why not? Provide mathematical justification for your answer.
- You implemented the suggested approach, and were careful to use batch sizes small enough so that each batch fits in memory. However, after some number of batches you got an out of memory error. What happened?
display_answer(hw2.answers.part2_q3)
Q3
In gradient descent we update the model parameters to minimize a loss function by moving in the direction of steepest descent. Backpropagation is the algorithm that computes the required gradients efficiently using the chain rule: a forward pass computes the output and the loss, and a backward pass determines how the loss is affected by each parameter. In short, backpropagation computes the gradients, and gradient descent uses them to update the weights. Gradient descent is a general optimization algorithm, while backpropagation is specific to computational graphs such as neural networks.
GD and SGD are both optimization techniques that minimize a loss function by updating model parameters. The main difference between them is how they compute the gradient: GD calculates it over the entire dataset at each step, while SGD computes it from a single randomly selected data point (or a small mini-batch) at each step.
GD is computationally expensive for large datasets because every iteration requires a full pass over the data. This makes it slow for massive datasets, although its convergence is steady since each gradient is exact, being computed over the whole dataset. GD is therefore suitable for small or medium-sized datasets, or for convex optimization problems where convergence is fast.
SGD is faster per step because it processes only one randomly selected data point at a time. This random selection helps the algorithm avoid getting stuck in local minima and keeps the updates from being biased toward any particular subset of the data. The convergence is noisier, which can slow measured progress but can also help escape local minima.
In summary, GD offers precise and stable updates on smaller datasets, while SGD provides a faster solution that scales to huge datasets.
SGD is used more often in deep learning because it is efficient on large datasets. Traditional GD requires processing the entire dataset for every update, whereas SGD updates the parameters from a single data point (or mini-batch) at a time, making it memory-efficient and giving far more frequent updates, which speeds up convergence in practice. The noise in SGD can also help it escape local minima and saddle points, which is beneficial in the complex, high-dimensional loss landscapes of neural networks. Additionally, SGD is easily extended with variants such as mini-batch SGD, Momentum and RMSProp that stabilize learning and improve performance. These characteristics, along with SGD's tendency to generalize better and reduce overfitting, make it the preferred optimization method for deep learning.
A.
The friend’s suggested approach would not produce a gradient equivalent to GD. In GD the gradient is computed over the entire dataset, averaging the gradients for each data point. In contrast, the friend’s method involves splitting the data into batches, computing the loss for each batch, summing these losses, and then performing a single backward pass on the sum of the batch losses. This approach leads to a gradient that is not averaged over the entire dataset, but rather computed on the summed batch-wise losses, resulting in a scaled gradient. The key difference is that GD computes:
$\nabla L(\theta) = \frac{1}{N} \sum_{i=1}^{N} \nabla l(\theta; x_i, y_i)$
while the friend's approach computes:
$\nabla L_{\text{total}}(\theta) = \sum_{j=1}^{K} \sum_{(x_i, y_i) \in B_j} \nabla l(\theta; x_i, y_i)$
where $N$ is the total number of samples, $K$ is the number of batches, and $B_j$ represents the data in batch $j$. Since the batches are disjoint and together cover the whole dataset, the double sum equals $N \cdot \nabla L(\theta)$: the gradient direction is identical, but the magnitude is scaled by $N$, which is equivalent to multiplying the learning rate by $N$.
However, if we average the gradients across all batches, such that we include all the data and then divide by the total number of samples $N$, then the friend's method would be equivalent to GD.
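A small sketch with synthetic data and a simple squared-error loss (our choice for illustration; any pointwise loss behaves the same) verifies the scaling claim:

import torch

# One backward pass on the SUM of per-batch losses yields exactly N times
# the full-batch (mean-loss) GD gradient: same direction, scaled magnitude.
torch.manual_seed(0)
N = 8
X, y = torch.randn(N, 3), torch.randn(N, 1)
w = torch.zeros(3, 1, requires_grad=True)

def batch_loss(idx):  # summed squared error over one batch (hypothetical loss)
    return ((X[idx] @ w - y[idx]) ** 2).sum()

# GD: gradient of the mean loss over the full dataset
g_gd, = torch.autograd.grad(batch_loss(torch.arange(N)) / N, w)

# Friend's approach: forward each disjoint batch, sum losses, one backward
total = sum(batch_loss(idx) for idx in torch.split(torch.arange(N), 2))
g_sum, = torch.autograd.grad(total, w)

print(torch.allclose(g_sum, N * g_gd))  # True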
B.
The out-of-memory error occurred because memory accumulates across batches: since the single backward pass is deferred until all batches have been processed, the computational graph of every batch, including all of its intermediate activations, must be kept in memory so that gradients can later flow through it. Memory consumption therefore grows linearly with the number of batches, and after enough batches it exceeds the available memory, even though each individual batch fits on its own. The fix is to call backward on each batch's loss as it is computed (gradient accumulation), which frees that batch's graph immediately, as sketched below.
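A sketch of the memory-safe alternative (gradient accumulation), again on hypothetical data:

import torch

# Calling backward() per batch frees that batch's graph immediately, and
# dividing each batch loss by N makes the accumulated w.grad equal the
# full-batch mean-loss gradient that GD would use.
torch.manual_seed(0)
N = 8
X, y = torch.randn(N, 3), torch.randn(N, 1)
w = torch.zeros(3, 1, requires_grad=True)

for idx in torch.split(torch.arange(N), 2):          # disjoint mini-batches
    loss = ((X[idx] @ w - y[idx]) ** 2).sum() / N    # pre-divide by N
    loss.backward()                                  # batch graph freed here

# Check against the gradient computed directly on the full dataset:
g_direct, = torch.autograd.grad(((X @ w - y) ** 2).sum() / N, w)
print(torch.allclose(w.grad, g_direct))  # True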
Question 4 (Automatic Differentiation)¶
Let $f = f_n \circ f_{n-1} \circ ... \circ f_1$ where each $f_i: \mathbb{R} \rightarrow \mathbb{R}$ is a differentiable function which is easy to evaluate and differentiate (each query costs $\mathcal{O}(1)$ at a given point).
- In this exercise you will reduce the memory complexity for evaluating $\nabla f (x_0)$ at some point $x_0$.
Assume that you are given $f$ already expressed as a computational graph, and a point $x_0$.
- Show how to reduce the memory complexity for computing the gradient using forward mode AD (maintaining the $\mathcal{O}(n)$ computation cost). What is the memory complexity?
- Show how to reduce the memory complexity for computing the gradient using backward mode AD (maintaining the $\mathcal{O}(n)$ computation cost). What is the memory complexity?
- Can these techniques be generalized for arbitrary computational graphs?
- Think how the backprop algorithm can benefit from these techniques when applied to deep architectures (e.g VGGs, ResNets).
display_answer(hw2.answers.part2_q4)
Your answer: Q4
A.
Forward-mode automatic differentiation computes derivatives alongside the function evaluation by traversing the computational graph in the forward direction. For a composite function $f$, it calculates both the function value and the derivative at each step. Since we have $n$ such functions, holding all of these values in memory would take $O(n)$ memory.
To reduce the memory complexity we can discard intermediate values: instead of storing all intermediate results, only the current value-derivative pair is retained, reducing memory usage to $O(1)$. Each of the $n$ steps still costs $O(1)$, so the total computation remains $O(n)$. A minimal sketch is shown below.
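A minimal sketch of this O(1)-memory forward sweep on a hypothetical chain of three scalar functions (the chain and the helper name grad_forward are ours):

import math

# Forward-mode AD on a chain f = f_n(...f_1(x)...) of scalar functions.
# Each entry pairs a function with its derivative.
chain = [
    (math.sin, math.cos),                 # f1, f1'
    (math.exp, math.exp),                 # f2, f2'
    (lambda t: t ** 2, lambda t: 2 * t),  # f3, f3'
]

def grad_forward(chain, x0):
    val, dval = x0, 1.0                     # seed derivative dx/dx = 1
    for f, df in chain:                     # one O(1) step per node
        val, dval = f(val), df(val) * dval  # chain rule; old val used on RHS
    return dval                             # = f'(x0), with O(1) memory

print(grad_forward(chain, 0.5))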
B.
In backward-mode automatic differentiation we can use checkpointing to reduce memory: strategically store some intermediate values during the forward pass and recompute the rest as needed during the backward pass. Instead of saving all $n$ intermediate values, the computational graph is divided into $\sqrt{n}$ segments, and only the values at the boundaries of these segments are stored. This reduces the memory complexity from $O(n)$ to $O(\sqrt{n})$.
During the backward pass, any intermediate value required for a gradient computation is recomputed by re-running the forward computation within its segment. Since each segment contains $\sqrt{n}$ steps, the recomputation adds $O(\sqrt{n})$ work per segment and $O(n)$ work overall, so the total computation cost remains $O(n)$.
Memory-reduction techniques such as checkpointing can be generalized to arbitrary computational graphs by strategically storing some intermediate values and recomputing the others as needed. In forward mode AD this involves discarding and recomputing values during a forward traversal, while in backward mode AD checkpoints are placed at key nodes and subgraphs are recomputed during the backward pass. These techniques remain effective for complex graphs with branches or parallel paths.
The backpropagation algorithm can benefit from memory-reduction techniques like checkpointing when applied to deep architectures such as VGGs and ResNets. These architectures have many layers and therefore require a large amount of memory to store intermediate activations during the forward pass. Without optimization, this memory demand limits the batch size or the maximum network depth that can be trained on hardware with limited memory.
By applying checkpointing, only selected intermediate activations are stored during the forward pass, while the others are recomputed as needed during the backward pass. This reduces memory usage, allowing deeper networks or larger batch sizes to be trained without exceeding memory limits. The trade-off is the additional computation for recomputation, which is usually modest compared to the gains in memory efficiency. A sketch using PyTorch's built-in checkpointing utility is shown below.
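A sketch of this idea using torch.utils.checkpoint on a hypothetical toy network (the segment count and shapes are our own choices):

import torch
from torch.utils.checkpoint import checkpoint

# Activations inside each checkpointed segment are not stored during the
# forward pass; they are recomputed during backward, trading compute for memory.
segments = torch.nn.ModuleList(
    torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())
    for _ in range(8)
)

x = torch.randn(16, 64, requires_grad=True)
out = x
for seg in segments:
    out = checkpoint(seg, out, use_reentrant=False)
out.sum().backward()  # segments are re-run here to rebuild activations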
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$
Part 3: Binary Classification with Multilayer Perceptrons¶
In this part we'll only answer questions. We hope this HW reduction helps you to finish on time :)
from cs236781.answers import display_answer
import hw2.answers
Question 1¶
Please explain the following errors and how to reduce them using the tools we learned in class. Use terms such as "receptive field", "population loss", etc.
- High Optimization error?
- High Generalization error?
- High Approximation error?
display_answer(hw2.answers.part3_q1)
//-----------------------------//
A. High optimization error:
High optimization error occurs when the model struggles to minimize the loss function on the training set, indicating that the model is unable to effectively fit the data it is trained on (underfitting).
Possible causes can include:
A model with too few parameters.
A poor choice of optimization algorithm or optimizer parameters.
Vanishing/exploding gradients.
A receptive field that is too small or too large: the receptive field is the area of the input the model can "observe" or "consider" at a time. A receptive field that is too small can prevent the model from capturing large-scale patterns, while one that is too large can wash out fine detail.
Addressing any of the above would help with high optimization error. Most fixes are straightforward; for vanishing/exploding gradients, we can clamp the gradient norm (e.g. clip it to a maximum value) and tune this as a hyperparameter, as sketched below.
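A short sketch of gradient-norm clipping with a hypothetical model and optimizer (torch.nn.utils.clip_grad_norm_ rescales the global gradient norm to at most max_norm before the optimizer step):

import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(4, 10)).sum()
loss.backward()
# Rescale gradients so their global norm is at most 1.0, then step.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()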
//-----------------------------//
B. High generalization error:
Correlated with overfitting, a high generalization error occurs when the model performs well on the training set but poorly on the test set. Effectively, the model has learned the noise or specific patterns of the training data rather than general features of the underlying distribution.
Possible causes include:
High model complexity: "too many" parameters can cause overfitting.
Training data that represents the general population badly (bad sampling, different distributions at training and test time).
Lack of regularization and early stopping, which are supposed to reduce or at least constrain effective model complexity.
Possible fixes:
Lowering model complexity (fewer parameters, for example a narrower or shallower network).
Fixing the sampling, or using some kind of normalization, perhaps even a learnable re-weighting between the train and test distributions.
Adding regularization such as L2, L1 or even elastic net.
//-----------------------------//
C. High Approximation Error
High approximation error happens when the model is not expressive enough to capture the underlying complexity of the data, i.e. the hypothesis class is too simple to represent the data effectively.
Possible causes can include:
Too little expressive capacity (too few parameters).
Bad feature selection or preprocessing of the input data.
Using an unsuitable hypothesis class regardless of its expressive power (like using a linear model for a non-linear problem).
A helpful principle is to target the population loss: focus on how effectively the model captures the general data distribution rather than individual samples within it, using techniques like richer hypothesis classes, batch training and ensemble methods.
Question 2¶
Consider a binary classifier.
Describe a case where you expect false positive rate (FPR) to be higher and one that false negative rate (FNR) to be higher.
display_answer(hw2.answers.part3_q2)
Your answer:
- An example of a case that tolerates, or even favours, a higher false positive rate is spam email detection:
In spam detection, a system focused on minimizing missed spam can become overly sensitive, leading to a higher FPR. Legitimate mails may be falsely flagged as spam and moved to spam folders, which can cause users to miss important messages. A balance must be found that avoids excessive false positives while still filtering spam effectively.
- An example of a case with a higher false negative rate is credit card fraud detection:
In credit card fraud detection, a higher false negative rate means the system fails to identify actual fraud, which is exactly what we want to avoid. This often happens when the model is tuned to avoid annoying customers by falsely flagging normal, legitimate transactions, i.e. to minimize false positives. As a result, some malicious transactions go under the radar, allowing scammers to continue. For instance, the system might ignore small or unusual purchases that match regular customer behavior, missing signs of fraud because they are not significant enough.
Question 3¶
You're training a binary classifier to screen a large cohort of patients for some disease, with the aim of detecting the disease early, before any symptoms appear. You train the model on easy-to-obtain features, so screening each individual patient is simple and low-cost. In case the model classifies a patient as sick, she must then be sent for further testing in order to confirm the illness. Assume that these further tests are expensive and involve high risk to the patient. Assume also that once diagnosed, a low-cost treatment exists.
You wish to screen as many people as possible at the lowest possible cost and loss of life. Would you still choose the same "optimal" point on the ROC curve as above? If not, how would you choose it? Answer these questions for two possible scenarios:
- A person with the disease will develop non-lethal symptoms that immediately confirm the diagnosis and can then be treated.
- A person with the disease shows no clear symptoms and may die with high probability if not diagnosed early enough, either by your model or by the expensive test.
Explain your answers.
display_answer(hw2.answers.part3_q3)
Your answer:
The choice of the "optimal" point on a ROC curve depends on the costs and risks correlated with false positives and false negatives in each scenario.
In the first scenario, where a person with the disease will eventually develop non-lethal symptoms that confirm the diagnosis, the primary concern is minimizing unnecessary costs and risks from false positives. False negatives are less crucial since the disease will eventually be identified and treated. In this case, the optimal point on the ROC curve should prioritize minimal FPR while maintaining balanced sensitivity.
On the other hand, the second scenario involves a high probability of death if the disease is not found as early as possible. Misclassifying a sick patient as healthy could be fatal, making false negatives critical. In such a case, the best point on the ROC curve should prioritize minimizing the false negative rate, even if it results in a higher FPR. This can involve using a lower decision threshold to make sure that most patients with the disease are classified correctly. While this increases the number of false positives, their associated costs and risks are outweighed by the lives potentially saved. A sketch of choosing the threshold per scenario is shown below.
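A small sketch of per-scenario threshold selection on synthetic scores, using scikit-learn's roc_curve (our tooling choice for illustration; the target rates of 5% and 95% are hypothetical):

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)                     # synthetic labels
y_score = y_true * 0.3 + rng.normal(0.4, 0.25, 1000)  # synthetic risk scores

fpr, tpr, thr = roc_curve(y_true, y_score)
# Scenario 1: follow-up tests are costly/risky, so cap the FPR (e.g. <= 5%)
mask = fpr <= 0.05
print('scenario 1 threshold:', thr[mask][np.argmax(tpr[mask])])
# Scenario 2: a missed patient may die, so demand high TPR (low FNR), e.g. >= 95%
i2 = int(np.argmax(tpr >= 0.95))
print('scenario 2 threshold:', thr[i2], 'FPR paid:', fpr[i2])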
Question 4¶
Explain why an MLP would not be the best model to train over sequential data. Consider text, where each data point is one word and you want to classify the sentiment of a sentence.
display_answer(hw2.answers.part3_q4)
Your answer:
MLPs are not well suited for sequential data like text because they do not capture the order of, or contextual relationships between, elements in a sequence. They treat inputs as independent features, ignoring dependencies such as negations that are important for understanding sentiment. In addition, MLPs require inputs of fixed size and dimension, which can cause information loss through truncating the input, or noise from padding it. They also lack the ability to effectively represent words in context, making them poorly suited for natural language tasks. Models such as RNNs and LSTMs are better options; we saw that they perform better on tasks like sentiment analysis.
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$
Part 4: Convolutional Neural Networks¶
In this part we will explore convolutional networks. We'll implement a common block-based deep CNN pattern, with and without residual connections.
import os
import re
import sys
import glob
import numpy as np
import matplotlib.pyplot as plt
import unittest
import torch
import torchvision
import torchvision.transforms as tvtf
%matplotlib inline
%load_ext autoreload
%autoreload 2
seed = 42
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
plt.rcParams.update({'font.size': 12})
test = unittest.TestCase()
Reminder: Convolutional layers and networks¶
Convolutional layers are the most essential building blocks of state-of-the-art deep learning image classification models, and also play an important role in many other tasks. As we saw in the tutorial, when applied to images, convolutional layers operate on and produce volumes (3D tensors) of activations.
A convenient way to interpret convolutional layers for images is as a collection of 3D learnable filters, each of which operates on a small spatial region of the input volume. Each filter is convolved with the input volume ("slides over it"), and a dot product is computed at each location followed by a non-linearity which produces one activation. All these activations produce a 2D plane known as a feature map. Multiple feature maps (one for each filter) comprise the output volume.

A crucial property of convolutional layers is their translation equivariance: shifting the input results in an equivalently shifted output. This gives convolutional networks the ability to detect features regardless of their spatial location in the input. A quick numeric check is shown below.
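A quick numeric check of this property with a random filter (the sizes are arbitrary; zero padding breaks exact equivariance only at the borders, so we compare interior columns):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 8, 8)
w = torch.randn(1, 1, 3, 3)
y = F.conv2d(x, w, padding=1)

x_shift = torch.zeros_like(x)
x_shift[..., 1:] = x[..., :-1]            # shift columns right by one pixel
y_shift = F.conv2d(x_shift, w, padding=1)

# Away from the borders, the output is the original output shifted by one.
print(torch.allclose(y_shift[..., 1:-1], y[..., :-2], atol=1e-6))  # True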
Convolutional network architectures usually follow a pattern of basic repeating blocks: one or more convolutional layers, each followed by a non-linearity (generally ReLU), and then a pooling layer to reduce spatial dimensions. Usually, the number of convolutional filters increases the deeper they are in the network. These layers are meant to extract features from the input. Then, one or more fully-connected layers are used to combine the extracted features into the required number of output class scores.
Building convolutional networks with PyTorch¶
PyTorch provides all the basic building blocks needed for creating a convolutional architecture within the torch.nn package.
Let's use them to create a basic convolutional network with the following architecture pattern:
[(CONV -> ACT)*P -> POOL]*(N/P) -> (FC -> ACT)*M -> FC
Here $N$ is the total number of convolutional layers, $P$ specifies how many convolutions to perform before each pooling layer and $M$ specifies the number of hidden fully-connected layers before the final output layer.
TODO: Complete the implementation of the CNN class in the hw2/cnn.py module.
Use PyTorch's nn.Conv2d and nn.MaxPool2d for the convolution and pooling layers.
It's recommended to implement the missing functionality in the order of the class' methods.
from hw2.cnn import CNN
test_params = [
dict(
in_size=(3,100,100), out_classes=10,
channels=[32]*4, pool_every=2, hidden_dims=[100]*2,
conv_params=dict(kernel_size=3, stride=1, padding=1),
activation_type='relu', activation_params=dict(),
pooling_type='max', pooling_params=dict(kernel_size=2),
),
dict(
in_size=(3,100,100), out_classes=10,
channels=[32]*4, pool_every=2, hidden_dims=[100]*2,
conv_params=dict(kernel_size=5, stride=2, padding=3),
activation_type='lrelu', activation_params=dict(negative_slope=0.05),
pooling_type='avg', pooling_params=dict(kernel_size=3),
),
dict(
in_size=(3,100,100), out_classes=3,
channels=[16]*5, pool_every=3, hidden_dims=[100]*1,
conv_params=dict(kernel_size=2, stride=2, padding=2),
activation_type='lrelu', activation_params=dict(negative_slope=0.1),
pooling_type='max', pooling_params=dict(kernel_size=2),
),
]
for i, params in enumerate(test_params):
torch.manual_seed(seed)
net = CNN(**params)
print(f"\n=== test {i=} ===")
print(net)
torch.manual_seed(seed)
test_out = net(torch.ones(1, 3, 100, 100))
print(f'{test_out=}')
expected_out = torch.load(f'tests/assets/expected_conv_out_{i:02d}.pt')
print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
=== test i=0 ===
CNN(
(feature_extractor): Sequential(
(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU()
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(5): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(6): ReLU()
(7): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(8): ReLU()
(9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(mlp): MLP(
(layers): Sequential(
(0): Linear(in_features=20000, out_features=100, bias=True)
(1): ReLU()
(2): Linear(in_features=100, out_features=100, bias=True)
(3): ReLU()
(4): Linear(in_features=100, out_features=10, bias=True)
(5): Identity()
)
)
)
test_out=tensor([[ 0.0745, -0.1058, 0.0928, 0.0476, 0.0057, 0.0051, 0.0938, -0.0582,
0.0573, 0.0583]], grad_fn=<AddmmBackward0>)
max_diff=7.450580596923828e-09
=== test i=1 ===
CNN(
(feature_extractor): Sequential(
(0): Conv2d(3, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
(1): LeakyReLU(negative_slope=0.05)
(2): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
(3): LeakyReLU(negative_slope=0.05)
(4): AvgPool2d(kernel_size=3, stride=3, padding=0)
(5): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
(6): LeakyReLU(negative_slope=0.05)
(7): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
(8): LeakyReLU(negative_slope=0.05)
(9): AvgPool2d(kernel_size=3, stride=3, padding=0)
)
(mlp): MLP(
(layers): Sequential(
(0): Linear(in_features=32, out_features=100, bias=True)
(1): LeakyReLU(negative_slope=0.05)
(2): Linear(in_features=100, out_features=100, bias=True)
(3): LeakyReLU(negative_slope=0.05)
(4): Linear(in_features=100, out_features=10, bias=True)
(5): Identity()
)
)
)
test_out=tensor([[ 0.0724, -0.0030, 0.0637, -0.0073, 0.0932, -0.0662, -0.0656, 0.0076,
0.0193, 0.0241]], grad_fn=<AddmmBackward0>)
max_diff=1.4901161193847656e-08
=== test i=2 ===
CNN(
(feature_extractor): Sequential(
(0): Conv2d(3, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
(1): LeakyReLU(negative_slope=0.1)
(2): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
(3): LeakyReLU(negative_slope=0.1)
(4): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
(5): LeakyReLU(negative_slope=0.1)
(6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(7): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
(8): LeakyReLU(negative_slope=0.1)
(9): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
(10): LeakyReLU(negative_slope=0.1)
)
(mlp): MLP(
(layers): Sequential(
(0): Linear(in_features=400, out_features=100, bias=True)
(1): LeakyReLU(negative_slope=0.1)
(2): Linear(in_features=100, out_features=3, bias=True)
(3): Identity()
)
)
)
test_out=tensor([[-0.0004, -0.0094, 0.0817]], grad_fn=<AddmmBackward0>)
max_diff=4.6566128730773926e-09
As before, we'll wrap our model with a Classifier that provides the necessary functionality for calculating probability scores and obtaining class label predictions.
This time, we'll use a simple approach: select the class with the highest score.
TODO: Implement the ArgMaxClassifier in the hw2/classifier.py module.
from hw2.classifier import ArgMaxClassifier
model = ArgMaxClassifier(model=CNN(**test_params[0]))
test_image = torch.randint(low=0, high=256, size=(3, 100, 100), dtype=torch.float).unsqueeze(0)
test.assertEqual(model.classify(test_image).shape, (1,))
test.assertEqual(model.predict_proba(test_image).shape, (1, 10))
test.assertAlmostEqual(torch.sum(model.predict_proba(test_image)).item(), 1.0, delta=1e-3)
Let's now load CIFAR-10 to use as our dataset.
data_dir = os.path.expanduser('~/.pytorch-datasets')
ds_train = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=True, transform=tvtf.ToTensor())
ds_test = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=False, transform=tvtf.ToTensor())
print(f'Train: {len(ds_train)} samples')
print(f'Test: {len(ds_test)} samples')
x0,_ = ds_train[0]
in_size = x0.shape
num_classes = 10
print('input image size =', in_size)
Files already downloaded and verified
Files already downloaded and verified
Train: 50000 samples
Test: 10000 samples
input image size = torch.Size([3, 32, 32])
As usual, let's run a sanity test: make sure we can overfit a tiny dataset with our model. But first, we need to adapt our Trainer for PyTorch models.
TODO:
- Complete the implementation of the ClassifierTrainer class in the hw2/training.py module if you haven't done so already.
- Set the optimizer hyperparameters in part4_optim_hp() in hw2/answers.py.
from hw2.training import ClassifierTrainer
from hw2.answers import part4_optim_hp
torch.manual_seed(seed)
# Define a tiny part of the CIFAR-10 dataset to overfit it
batch_size = 2
max_batches = 25
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)
# Create model, loss and optimizer instances
model = ArgMaxClassifier(
model=CNN(
in_size, num_classes, channels=[32], pool_every=1, hidden_dims=[100],
conv_params=dict(kernel_size=3, stride=1, padding=1),
pooling_params=dict(kernel_size=2),
)
)
hp_optim = part4_optim_hp()
loss_fn = hp_optim.pop('loss_fn')
optimizer = torch.optim.SGD(params=model.parameters(), **hp_optim)
# Use ClassifierTrainer to run only the training loop a few times.
trainer = ClassifierTrainer(model, loss_fn, optimizer, device)
best_acc = 0
for i in range(25):
res = trainer.train_epoch(dl_train, max_batches=max_batches, verbose=(i%5==0))
best_acc = res.accuracy if res.accuracy > best_acc else best_acc
# Test overfitting
test.assertGreaterEqual(best_acc, 90)
Residual Networks¶
A very common addition to the basic convolutional architecture described above is shortcut connections. First proposed by He et al. (2016), this simple addition has been shown to be a crucial ingredient for effective learning with very deep networks. Virtually all state-of-the-art image classification models of recent years use this technique.
The idea is to add a shortcut, or skip, around every two or more convolutional layers:

On the left we see an example of a regular residual block, which takes a 64-channel input and performs two 3x3 convolutions, whose output is added to the original input.
On the right we see an example of a bottleneck residual block, which takes a 256-channel input, projects it to a 64-channel tensor with a 1x1 convolution, performs an inner 3x3 convolution, and then applies another 1x1 projection convolution back to the original number of channels, 256. The output is then added to the original input.
Overall, we can denote the structure of the bottleneck channels in the given example as 256->64->64->256, where the first and last arrows denote the 1x1 convolutions, and the middle arrow is the inner convolution. Note that the 1x1 convolution with the default parameters (in PyTorch) is defined such that the only dimension of the tensor that changes is the number of channels.
This adds an easy way for the network to learn identity mappings: set the weight values to be very small. The outcome is that the convolutional layers learn a residual mapping, i.e. some delta that is applied to the identity map, instead of actually learning a completely new mapping from scratch.
Lets start by implementing a general residual block, representing a structure similar to the above diagrams. Our residual block will be composed of:
- A "main path" with some number of convolutional layers with ReLU between them. Optionally, we'll also apply dropout and batch normalization layers (in this order) between the convolutions, before the ReLU.
- A "shortcut path" implementing an identity mapping around the main path. In case of a different number of input/output channels, the shortcut path should contain an additional
1x1convolution to project the channel dimension. - The sum of the main and shortcut paths output is passed though a ReLU and returned.
TODO: Complete the implementation of the ResidualBlock's __init__() method in the hw2/cnn.py module.
from hw2.cnn import ResidualBlock
torch.manual_seed(seed)
resblock = ResidualBlock(
in_channels=3, channels=[6, 4]*2, kernel_sizes=[3, 5]*2,
batchnorm=True, dropout=0.2
)
print(resblock)
torch.manual_seed(seed)
test_out = resblock(torch.ones(1, 3, 32, 32))
print(f'{test_out.shape=}')
expected_out = torch.load('tests/assets/expected_resblock_out.pt')
print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
ResidualBlock(
(relu): ReLU()
(main_path): Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): Dropout2d(p=0.2, inplace=False)
(2): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
(4): Conv2d(6, 4, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(5): Dropout2d(p=0.2, inplace=False)
(6): BatchNorm2d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): ReLU()
(8): Conv2d(4, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(9): Dropout2d(p=0.2, inplace=False)
(10): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(11): ReLU()
(12): Conv2d(6, 4, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
)
(shortcut_path): Sequential(
(0): Conv2d(3, 4, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
)
test_out.shape=torch.Size([1, 4, 32, 32])
max_diff=3.5762786865234375e-07
Bottleneck Blocks¶
In the ResNet Block diagram shown above, the right block is called a bottleneck block. This type of block is mainly used deep in the network, where the feature space becomes increasingly high-dimensional (i.e. there are many channels).
Instead of applying a KxK conv layer on the original input channels, a bottleneck block first projects to a lower number of features (channels), applies the KxK conv on the result, and then projects back to the original feature space. Both projections are performed with 1x1 convolutions.
TODO: Complete the implementation of the ResidualBottleneckBlock in the hw2/cnn.py module.
from hw2.cnn import ResidualBottleneckBlock
torch.manual_seed(seed)
resblock_bn = ResidualBottleneckBlock(
in_out_channels=256, inner_channels=[64, 32, 64], inner_kernel_sizes=[3, 5, 3],
batchnorm=False, dropout=0.1, activation_type="lrelu"
)
print(resblock_bn)
# Test a forward pass
torch.manual_seed(seed)
test_in = torch.ones(1, 256, 32, 32)
test_out = resblock_bn(test_in)
print(f'{test_out.shape=}')
assert test_out.shape == test_in.shape
expected_out = torch.load('tests/assets/expected_resblock_bn_out.pt')
print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
ResidualBottleneckBlock(
(relu): ReLU()
(main_path): Sequential(
(0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
(1): Dropout2d(p=0.1, inplace=False)
(2): LeakyReLU(negative_slope=0.01)
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): Dropout2d(p=0.1, inplace=False)
(5): LeakyReLU(negative_slope=0.01)
(6): Conv2d(64, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(7): Dropout2d(p=0.1, inplace=False)
(8): LeakyReLU(negative_slope=0.01)
(9): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(10): Dropout2d(p=0.1, inplace=False)
(11): LeakyReLU(negative_slope=0.01)
(12): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
)
(shortcut_path): Identity()
)
test_out.shape=torch.Size([1, 256, 32, 32])
max_diff=1.1920928955078125e-07
Now, based on the ResidualBlock, we'll implement our own variation of a residual network (ResNet),
with the following architecture:
[-> (CONV -> ACT)*P -> POOL]*(N/P) -> (FC -> ACT)*M -> FC
\------- SKIP ------/
Note that $N$, $P$ and $M$ are as before, however now $P$ also controls the number of convolutional layers to add a skip-connection to.
TODO: Complete the implementation of the ResNet class in the hw2/cnn.py module.
You must use your ResidualBlocks or ResidualBottleneckBlocks to group together every $P$ convolutional layers.
from hw2.cnn import ResNet
test_params = [
dict(
in_size=(3,100,100), out_classes=10, channels=[32, 64]*3,
pool_every=4, hidden_dims=[100]*2,
activation_type='lrelu', activation_params=dict(negative_slope=0.01),
pooling_type='avg', pooling_params=dict(kernel_size=2),
batchnorm=True, dropout=0.1,
bottleneck=False
),
dict(
# create 64->16->64 bottlenecks
in_size=(3,100,100), out_classes=5, channels=[64, 16, 64]*4,
pool_every=3, hidden_dims=[64]*1,
activation_type='tanh',
pooling_type='max', pooling_params=dict(kernel_size=2),
batchnorm=True, dropout=0.1,
bottleneck=True
)
]
for i, params in enumerate(test_params):
torch.manual_seed(seed)
net = ResNet(**params)
print(f"\n=== test {i=} ===")
print(net)
torch.manual_seed(seed)
test_out = net(torch.ones(1, 3, 100, 100))
print(f'{test_out=}')
expected_out = torch.load(f'tests/assets/expected_resnet_out_{i:02d}.pt')
print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
=== test i=0 ===
ResNet(
(feature_extractor): Sequential(
(0): ResidualBlock(
(relu): ReLU()
(main_path): Sequential(
(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): Dropout2d(p=0.1, inplace=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): LeakyReLU(negative_slope=0.01)
(4): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): Dropout2d(p=0.1, inplace=False)
(6): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): LeakyReLU(negative_slope=0.01)
(8): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(9): Dropout2d(p=0.1, inplace=False)
(10): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(11): LeakyReLU(negative_slope=0.01)
(12): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(shortcut_path): Sequential(
(0): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
)
(1): AvgPool2d(kernel_size=2, stride=2, padding=0)
(2): ResidualBlock(
(relu): ReLU()
(main_path): Sequential(
(0): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): Dropout2d(p=0.1, inplace=False)
(2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): LeakyReLU(negative_slope=0.01)
(4): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(shortcut_path): Identity()
)
)
(mlp): MLP(
(layers): Sequential(
(0): Linear(in_features=160000, out_features=100, bias=True)
(1): LeakyReLU(negative_slope=0.01)
(2): Linear(in_features=100, out_features=100, bias=True)
(3): LeakyReLU(negative_slope=0.01)
(4): Linear(in_features=100, out_features=10, bias=True)
(5): Identity()
)
)
)
test_out=tensor([[ 0.0422, 0.0332, 0.1870, -0.0532, -0.0742, 0.1143, -0.0617, -0.0467,
0.0852, 0.0221]], grad_fn=<AddmmBackward0>)
max_diff=1.1920928955078125e-07
=== test i=1 ===
ResNet(
(feature_extractor): Sequential(
(0): ResidualBlock(
(relu): ReLU()
(main_path): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): Dropout2d(p=0.1, inplace=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): Tanh()
(4): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): Dropout2d(p=0.1, inplace=False)
(6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): Tanh()
(8): Conv2d(16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(shortcut_path): Sequential(
(0): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
)
)
(1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(2): ResidualBottleneckBlock(
(relu): ReLU()
(main_path): Sequential(
(0): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1))
(1): Dropout2d(p=0.1, inplace=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): Tanh()
(4): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): Dropout2d(p=0.1, inplace=False)
(6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): Tanh()
(8): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1))
)
(shortcut_path): Identity()
)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(4): ResidualBottleneckBlock(
(relu): ReLU()
(main_path): Sequential(
(0): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1))
(1): Dropout2d(p=0.1, inplace=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): Tanh()
(4): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): Dropout2d(p=0.1, inplace=False)
(6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): Tanh()
(8): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1))
)
(shortcut_path): Identity()
)
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): ResidualBottleneckBlock(
(relu): ReLU()
(main_path): Sequential(
(0): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1))
(1): Dropout2d(p=0.1, inplace=False)
(2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): Tanh()
(4): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): Dropout2d(p=0.1, inplace=False)
(6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(7): Tanh()
(8): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1))
)
(shortcut_path): Identity()
)
(7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(mlp): MLP(
(layers): Sequential(
(0): Linear(in_features=2304, out_features=64, bias=True)
(1): Tanh()
(2): Linear(in_features=64, out_features=5, bias=True)
(3): Identity()
)
)
)
test_out=tensor([[ 0.0237, -0.1945, -0.0085, -0.4024, -0.2667]],
grad_fn=<AddmmBackward0>)
max_diff=6.109476089477539e-07
Questions¶
TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.
from cs236781.answers import display_answer
import hw2.answers
Question 1¶
Consider the bottleneck block from the right side of the ResNet diagram above. Compare it to a regular block that performs two 3x3 convolutions directly on the 256-channel input (i.e. as shown on the left side of the diagram, but with a different number of channels). Explain the differences between the regular block and the bottleneck block in terms of:
- Number of parameters. Calculate the exact numbers for these two examples.
- Number of floating point operations required to compute an output (qualitative assessment).
- Ability to combine the input: (1) spatially (within feature maps); (2) across feature maps.
display_answer(hw2.answers.part4_q1)
Your answer: Q1.1.
Calculation of the number of parameters of the regular block:
The regular block has two 3x3 convolutional layers operating directly on a 256-channel input, therefore the number of parameters in the first layer is:
3 X 3 X 256 X 256 + 256 (bias) = 590,080
The second layer is exactly the same, so the total number of parameters is:
2 X 590,080 = 1,180,160 parameters
Calculation of the number of parameters of the bottleneck block:
First layer, 1x1 conv, 256 to 64:
1 X 1 X 256 X 64 + 64 (bias) = 16,448
Second layer, 3x3 conv, 64 to 64:
3 X 3 X 64 X 64 + 64 (bias) = 36,928
Final layer, 1x1 conv, 64 to 256:
1 X 1 X 64 X 256 + 256 (bias) = 16,640
Total number of parameters: 16,448 + 36,928 + 16,640 = 70,016
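A short sketch reproducing these counts with standard PyTorch modules (assuming bias-carrying Conv2d layers, as in the calculation above):

import torch.nn as nn

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

regular = nn.Sequential(                      # two 3x3 convs on 256 channels
    nn.Conv2d(256, 256, kernel_size=3, padding=1),
    nn.Conv2d(256, 256, kernel_size=3, padding=1),
)
bottleneck = nn.Sequential(                   # 256 -> 64 -> 64 -> 256
    nn.Conv2d(256, 64, kernel_size=1),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.Conv2d(64, 256, kernel_size=1),
)
print(n_params(regular))     # 1180160
print(n_params(bottleneck))  # 70016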
- The number of floating point operations can be approximated as follows. For the regular block:
2 (convolution layers) X H X W (spatial input size) X 3 X 3 (conv kernel size) X 256 (input channels) X 256 (output channels) = H X W X 1,179,648
For the bottleneck block:
First layer: H X W (spatial input size) X 1 X 1 (conv kernel size) X 256 (input channels) X 64 (output channels)
Second layer: H X W (spatial input size) X 3 X 3 (conv kernel size) X 64 (input channels) X 64 (output channels)
Final layer: H X W (spatial input size) X 1 X 1 (conv kernel size) X 64 (input channels) X 256 (output channels)
Total FLOPs: H X W X 69,632, roughly 17 times fewer than the regular block.
The regular block and the bottleneck block differ in their ability to combine input spatially (within feature maps) and across feature maps. The regular block uses two 3×3 convolutions, both operating on the full number of input channels (256). This allows it to effectively combine spatial information within feature maps over a larger receptive field (equivalent to 5×5 for the two stacked layers). As it retains the full number of channels throughout, the regular block is better suited for capturing fine spatial details and dependencies.
On the other hand, the bottleneck block includes only one 3×3 convolution for spatial combination, and it operates on a reduced number of feature maps (64, reduced from 256 by the preceding 1×1 convolution). Its effective receptive field is only 3×3 (a single 3×3 layer, versus the 5×5 of two stacked 3×3 layers), so the bottleneck block is less rich in spatial combination, both due to the smaller receptive field and the dimensionality reduction. However, this reduction also makes the bottleneck block computationally much more efficient.
Where the bottleneck block excels is in combining information across feature maps. The 1×1 convolutions in the bottleneck block allow selective compression and expansion of features, enabling more flexible recombination of feature maps and down-weighting redundant ones. In contrast, the regular block lacks this explicit feature selection and recombination mechanism, as both of its convolutions are spatially focused.
In summary, the regular block is better at spatially combining information within feature maps, while the bottleneck block is more effective at combining information across feature maps as a result of 1×1 convolutions.
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$
Part 5: Convolutional Architecture Experiments¶
In this part we will explore convolutional networks and the effects of their architecture on accuracy. We'll use our deep CNN implementation and perform various experiments on it while varying the architecture. Then we'll implement our own custom architecture to see whether we can achieve high classification accuracy on a large subset of CIFAR-10.
Training will be performed on GPU.
import os
import re
import sys
import glob
import numpy as np
import matplotlib.pyplot as plt
import unittest
import torch
import torchvision
import torchvision.transforms as tvtf
%matplotlib inline
%load_ext autoreload
%autoreload 2
seed = 42
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
plt.rcParams.update({'font.size': 12})
test = unittest.TestCase()
Experimenting with model architectures¶
We will now perform a series of experiments that train various model configurations on a part of the CIFAR-10 dataset.
To perform the experiments, you'll need to use a machine with a GPU since training time might be too long otherwise.
Note about running on GPUs¶
Here's an example of running a forward pass on the GPU (assuming you're running this notebook on a GPU-enabled machine).
from hw2.cnn import ResNet
net = ResNet(
    in_size=(3, 100, 100), out_classes=10, channels=[32, 64]*3,
    pool_every=4, hidden_dims=[100]*2,
    pooling_type='avg', pooling_params=dict(kernel_size=2),
)
net = net.to(device)
test_image = torch.randint(low=0, high=256, size=(3, 100, 100), dtype=torch.float).unsqueeze(0)
test_image = test_image.to(device)
test_out = net(test_image)
Notice how we called .to(device) on both the model and the input tensor.
Here device is a torch.device object that we created above. If an NVIDIA GPU is available on the machine you're running this on, the device will be 'cuda'. When you call .to(device) on a model, it recursively goes over all the model's parameter tensors and copies their memory to the GPU. Similarly, calling .to(device) on the input image copies it to the GPU as well.
In order to train on a GPU, you need to make sure to move all your tensors to it. You'll get errors if you try to mix CPU and GPU tensors in a computation.
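For example (a hypothetical snippet of ours, assuming this notebook is running with device='cuda'), feeding a CPU tensor to the GPU-resident model raises an error:
cpu_image = torch.randn(1, 3, 100, 100)  # new tensors live on the CPU by default
try:
    net(cpu_image)  # net's parameters were moved to the GPU above
except RuntimeError as e:
    print(e)  # e.g. "Input type ... and weight type ... should be the same"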
print(f'This notebook is running with device={device}')
print(f'The model parameter tensors are also on device={next(net.parameters()).device}')
print(f'The test image is also on device={test_image.device}')
print(f'The output is therefore also on device={test_out.device}')
This notebook is running with device=cuda The model parameter tensors are also on device=cuda:0 The test image is also on device=cuda:0 The output is therefore also on device=cuda:0
Notes on using course servers¶
First, please read the course servers guide carefully.
To run the experiments on the course servers, you can use the py-sbatch.sh script directly to perform a single experiment run in batch mode (since it runs python once), or use the srun command to do a single run in interactive mode. For example, running a single run of experiment 1 interactively (after conda activate of course):
srun -c 2 --gres=gpu:1 --pty python -m hw2.experiments run-exp -n test -K 32 64 -L 2 -P 2 -H 100
To perform multiple runs in batch mode with sbatch (e.g. for running all the configurations of an experiment), you can create your own script based on py-sbatch.sh and invoke whatever commands you need within it.
Don't request more than 2 CPU cores and 1 GPU device for your runs. The code won't be able to utilize more than that anyway, so you'll see no performance gain if you do. It will only cause delays for other students using the servers.
General notes for running experiments¶
- You can run the experiments on a different machine (e.g. the course servers) and copy the results (files) to the `results` folder on your local machine. This notebook will only display the results, not run the actual experiment code (except for a demo run).
- It's important to give each experiment run a name as specified by the notebook instructions later on. Each run has a `run_name` parameter that will also be the base name of the results file which this notebook will expect to load.
- You will implement the code to run the experiments in the `hw2/experiments.py` module. This module has a CLI parser so that you can invoke it as a script and pass in all the configuration parameters for a single experiment run.
- You should use `python -m hw2.experiments run-exp` to run an experiment, and not `python hw2/experiments.py run-exp`, regardless of how/where you run it.
Experiment 1: Network depth and number of filters¶
In this part we will test some different architecture configurations based on our CNN and ResNet.
Specifically, we want to try different depths and numbers of filters to see the effects these parameters have on the model's performance.
To do this, we'll define two extra hyperparameters for our model, K (filters_per_layer) and L (layers_per_block).
- `K` is a list containing the number of filters we want to have in our conv layers.
- `L` is the number of consecutive layers with the same number of filters to use.
For example, if K=[32, 64] and L=2 it means we want two conv layers with 32 filters followed by two conv layers with 64 filters. If we also use pool_every=3, the feature-extraction part of our model will be:
Conv(X,32)->ReLU->Conv(32,32)->ReLU->Conv(32,64)->ReLU->MaxPool->Conv(64,64)->ReLU
We'll try various values of the K and L parameters in combination and see how each architecture trains. All other hyperparameters are up to you, including the choice of the optimization algorithm, the learning rate, regularization and architecture hyperparams such as pool_every and hidden_dims. Note that you should select the pool_every parameter wisely per experiment so that you don't end up with zero-width feature maps.
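For example, a quick check (our own sketch, assuming the 32×32 CIFAR-10 inputs and the 2×2, stride-2 pooling used throughout) shows how the choice of pool_every bounds the usable depth:
# Each 2x2/stride-2 pooling halves the spatial size of 32x32 CIFAR-10 inputs.
size, total_conv_layers, pool_every = 32, 16, 3  # hypothetical: 16 conv layers, pool every 3
for _ in range(total_conv_layers // pool_every):
    size //= 2
print(size)  # 1 here; one more pooling would collapse the feature maps to zero width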
You can try some short manual runs to determine some good values for the hyperparameters or implement cross-validation to do it. However, the dataset size you test on should be large. If you limit the number of batches, make sure to use at least 30000 training images and 5000 validation images.
The important thing is that you state what you used, how you decided on it, and explain your results based on that.
First we need to write some code to run the experiment.
TODO:
- Implement the `cnn_experiment()` function in the `hw2/experiments.py` module.
- If you haven't done so already, it would be an excellent idea to implement the early stopping feature of the `Trainer` class.
The following block tests that your implementation works. It's also meant to show you that each experiment run creates a result file containing the parameters to reproduce and the FitResult object for plotting.
from hw2.experiments import load_experiment, cnn_experiment
from cs236781.plot import plot_fit
# Test experiment1 implementation on a few data samples and with a small model
cnn_experiment(
    'test_run', seed=seed, bs_train=50, batches=10, epochs=10, early_stopping=5,
    filters_per_layer=[32, 64], layers_per_block=1, pool_every=1, hidden_dims=[100],
    model_type='resnet',
)
# There should now be a file 'test_run.json' in your `results/` folder.
# We can use it to load the results of the experiment.
cfg, fit_res = load_experiment('results/test_run_L1_K32-64.json')
_, _ = plot_fit(fit_res, train_test_overlay=True)
# And `cfg` contains the exact parameters to reproduce it
print('experiment config: ', cfg)
Files already downloaded and verified Files already downloaded and verified --- EPOCH 1/10 ---
train_batch: 0%| | 0/10 [00:00<?, ?it/s]
test_batch: 0%| | 0/10 [00:00<?, ?it/s]
--- EPOCH 2/10 ---
train_batch: 0%| | 0/10 [00:00<?, ?it/s]
test_batch: 0%| | 0/10 [00:00<?, ?it/s]
--- EPOCH 3/10 ---
train_batch: 0%| | 0/10 [00:00<?, ?it/s]
test_batch: 0%| | 0/10 [00:00<?, ?it/s]
--- EPOCH 4/10 ---
train_batch: 0%| | 0/10 [00:00<?, ?it/s]
test_batch: 0%| | 0/10 [00:00<?, ?it/s]
--- EPOCH 5/10 ---
train_batch: 0%| | 0/10 [00:00<?, ?it/s]
test_batch: 0%| | 0/10 [00:00<?, ?it/s]
--- EPOCH 6/10 ---
train_batch: 0%| | 0/10 [00:00<?, ?it/s]
test_batch: 0%| | 0/10 [00:00<?, ?it/s]
--- EPOCH 7/10 ---
train_batch: 0%| | 0/10 [00:00<?, ?it/s]
test_batch: 0%| | 0/10 [00:00<?, ?it/s]
--- EPOCH 8/10 ---
train_batch: 0%| | 0/10 [00:00<?, ?it/s]
test_batch: 0%| | 0/10 [00:00<?, ?it/s]
--- EPOCH 9/10 ---
train_batch: 0%| | 0/10 [00:00<?, ?it/s]
test_batch: 0%| | 0/10 [00:00<?, ?it/s]
--- EPOCH 10/10 ---
train_batch: 0%| | 0/10 [00:00<?, ?it/s]
test_batch: 0%| | 0/10 [00:00<?, ?it/s]
*** Output file ./results/test_run_L1_K32-64.json written
experiment config: {'run_name': 'test_run', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 50, 'bs_test': 12, 'batches': 10, 'epochs': 10, 'early_stopping': 5, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'filters_per_layer': [32, 64], 'layers_per_block': 1, 'pool_every': 1, 'hidden_dims': [100], 'model_type': 'resnet', 'activation': 'relu', 'activation_params': {}, 'batch_norm': False, 'dropout': 0.4, 'bottleneck': False, 'pooling_type': 'avg', 'pooling_params': {'kernel_size': 2}, 'conv_params': {'kernel_size': 2, 'stride': 1, 'padding': 1}, 'kw': {}}
We'll use the following function to load multiple experiment results and plot them together.
def plot_exp_results(filename_pattern, results_dir='results'):
    fig = None
    result_files = glob.glob(os.path.join(results_dir, filename_pattern))
    result_files.sort()
    if len(result_files) == 0:
        print(f'No results found for pattern {filename_pattern}.', file=sys.stderr)
        return
    for filepath in result_files:
        # Extract the varying part of the run name (e.g. 'L2_K32') for the legend
        m = re.match(r'exp\d_(\d_)?(.*)\.json', os.path.basename(filepath))
        cfg, fit_res = load_experiment(filepath)
        fig, axes = plot_fit(fit_res, fig, legend=m[2], log_loss=True)
    # The remaining config keys are common to all plotted runs
    del cfg['filters_per_layer']
    del cfg['layers_per_block']
    print('common config: ', cfg)
Our code¶
from hw2.experiments import load_experiment, cnn_experiment
from cs236781.plot import plot_fit
def run_experiments(k_list, L_list, seed, experiment_name=None, bs_train=50,
                    batches=10, epochs=10, early_stopping=5, pool_every=2,
                    hidden_dims=[100], model_type='resnet'):
    """
    Run CNN experiments for all combinations of filters per layer (K) and layers per block (L).
    Args:
        k_list (list): List of filters-per-layer values.
        L_list (list): List of layers-per-block values.
        seed (int): Random seed for reproducibility.
        experiment_name (str): Base name for the runs and their result files.
        bs_train (int): Batch size for training.
        batches (int): Number of batches per epoch.
        epochs (int): Number of epochs.
        early_stopping (int): Early stopping patience.
        pool_every (int): Pooling frequency.
        hidden_dims (list): List of hidden dimensions for the model.
        model_type (str): Type of model (e.g., 'resnet').
    """
    for L in L_list:
        for k in k_list:
            # Run the CNN experiment
            cnn_experiment(
                experiment_name,
                seed=seed,
                bs_train=bs_train,
                batches=batches,
                epochs=epochs,
                early_stopping=early_stopping,
                filters_per_layer=k if isinstance(k, list) else [k],
                layers_per_block=L,
                pool_every=pool_every,
                hidden_dims=hidden_dims,
                model_type=model_type,
            )
            # # Load the results of the experiment
            # results_file = f'results/{experiment_name}_L{L}_K{k}.json'
            # cfg, fit_res = load_experiment(results_file)
            # # Plot the fitting results
            # _, _ = plot_fit(fit_res, train_test_overlay=True)
            # # Print the experiment configuration for reference
            # print(f'Experiment config for L={L} and K={k}: ', cfg)
Experiment 1.1: Varying the network depth (L)¶
First, we'll test the effect of the network depth on training.
Configurations:
- `K=32` fixed, with `L=2,4,8,16` varying per run
- `K=64` fixed, with `L=2,4,8,16` varying per run
So 8 different runs in total.
Naming runs:
Each run should be named exp1_1_L{}_K{} where the braces are placeholders for the values. For example, the first run should be named exp1_1_L2_K32.
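For reference, the full set of expected result-file base names can be generated directly from this convention (a quick sanity check of ours, not something the assignment requires):
# Expected run names for experiment 1.1, per the naming convention above.
for K in (32, 64):
    for L in (2, 4, 8, 16):
        print(f'exp1_1_L{L}_K{K}')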
TODO: Run the experiment on the above configuration with the CNN model. Make sure the result file names are as expected. Use the following blocks to display the results.
Our Code¶
# Define the lists for filters per layer (k) and layers per block (L)
k_list = [32, 64]
L_list = [2,4,8,16]
seed = 42
# Run the experiments
run_experiments(k_list=k_list,
                L_list=L_list,
                seed=seed,
                experiment_name="exp1_1",
                bs_train=128,
                batches=250,
                epochs=10,
                early_stopping=5,
                pool_every=3,
                hidden_dims=[100],
                model_type='cnn')
plot_exp_results('exp1_1_L*_K32*.json')
common config: {'run_name': 'exp1_1', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 50, 'bs_test': 12, 'batches': 250, 'epochs': 10, 'early_stopping': 5, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [100], 'model_type': 'cnn', 'activation': 'relu', 'activation_params': {}, 'batch_norm': False, 'dropout': 0.4, 'bottleneck': False, 'pooling_type': 'avg', 'pooling_params': {'kernel_size': 2}, 'conv_params': {'kernel_size': 2, 'stride': 1, 'padding': 1}, 'kw': {}}
plot_exp_results('exp1_1_L*_K64*.json')
common config: {'run_name': 'exp1_1', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 50, 'bs_test': 12, 'batches': 250, 'epochs': 10, 'early_stopping': 5, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [100], 'model_type': 'cnn', 'activation': 'relu', 'activation_params': {}, 'batch_norm': False, 'dropout': 0.4, 'bottleneck': False, 'pooling_type': 'avg', 'pooling_params': {'kernel_size': 2}, 'conv_params': {'kernel_size': 2, 'stride': 1, 'padding': 1}, 'kw': {}}
Experiment 1.2: Varying the number of filters per layer (K)¶
Now we'll test the effect of the number of convolutional filters in each layer.
Configurations:
- `L=2` fixed, with `K=[32],[64],[128]` varying per run.
- `L=4` fixed, with `K=[32],[64],[128]` varying per run.
- `L=8` fixed, with `K=[32],[64],[128]` varying per run.
So 9 different runs in total. To clarify, in each run `K` takes the value of a single-element list.
Naming runs:
Each run should be named exp1_2_L{}_K{} where the braces are placeholders for the values. For example, the first run should be named exp1_2_L2_K32.
TODO: Run the experiment on the above configuration with the CNN model. Make sure the result file names are as expected. Use the following blocks to display the results.
# Define the lists for filters per layer (k) and layers per block (L)
k_list = [32, 64, 128]
L_list = [2,4,8]
seed = 42
# Run the experiments
run_experiments(k_list=k_list,
                L_list=L_list,
                seed=seed,
                experiment_name="exp1_2",
                bs_train=128,
                batches=250,
                epochs=10,
                early_stopping=5,
                pool_every=3,
                hidden_dims=[100],
                model_type='cnn')
plot_exp_results('exp1_2_L2*.json')
common config: {'run_name': 'exp1_2', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 50, 'bs_test': 12, 'batches': 250, 'epochs': 10, 'early_stopping': 5, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [100], 'model_type': 'cnn', 'activation': 'relu', 'activation_params': {}, 'batch_norm': False, 'dropout': 0.4, 'bottleneck': False, 'pooling_type': 'avg', 'pooling_params': {'kernel_size': 2}, 'conv_params': {'kernel_size': 2, 'stride': 1, 'padding': 1}, 'kw': {}}
plot_exp_results('exp1_2_L4*.json')
common config: {'run_name': 'exp1_2', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 50, 'bs_test': 12, 'batches': 250, 'epochs': 10, 'early_stopping': 5, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [100], 'model_type': 'cnn', 'activation': 'relu', 'activation_params': {}, 'batch_norm': False, 'dropout': 0.4, 'bottleneck': False, 'pooling_type': 'avg', 'pooling_params': {'kernel_size': 2}, 'conv_params': {'kernel_size': 2, 'stride': 1, 'padding': 1}, 'kw': {}}
plot_exp_results('exp1_2_L8*.json')
common config: {'run_name': 'exp1_2', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 50, 'bs_test': 12, 'batches': 250, 'epochs': 10, 'early_stopping': 5, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [100], 'model_type': 'cnn', 'activation': 'relu', 'activation_params': {}, 'batch_norm': False, 'dropout': 0.4, 'bottleneck': False, 'pooling_type': 'avg', 'pooling_params': {'kernel_size': 2}, 'conv_params': {'kernel_size': 2, 'stride': 1, 'padding': 1}, 'kw': {}}
Experiment 1.3: Varying both the number of filters (K) and network depth (L)¶
Now we'll test the effect of varying both the number of filters per layer and the network depth together.
Configurations:
- `K=[64, 128]` fixed, with `L=2,3,4` varying per run.
So 3 different runs in total. To clarify, in each run `K` takes the value of a two-element list.
Naming runs:
Each run should be named exp1_3_L{}_K{}-{} where the braces are placeholders for the values. For example, the first run should be named exp1_3_L2_K64-128.
# Define the lists for filters per layer (k) and layers per block (L)
k_list = [[64, 128]]
L_list = [2,3,4]
seed = 42
# Run the experiments
run_experiments(k_list=k_list,
                L_list=L_list,
                seed=seed,
                experiment_name="exp1_3",
                bs_train=128,
                batches=250,
                epochs=10,
                early_stopping=5,
                pool_every=3,
                hidden_dims=[100],
                model_type='cnn')
TODO: Run the experiment on the above configuration with the CNN model. Make sure the result file names are as expected. Use the following blocks to display the results.
plot_exp_results('exp1_3*.json')
common config: {'run_name': 'exp1_3', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 250, 'epochs': 10, 'early_stopping': 5, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [100], 'model_type': 'cnn', 'activation': 'relu', 'activation_params': {}, 'batch_norm': False, 'dropout': 0.4, 'bottleneck': False, 'pooling_type': 'avg', 'pooling_params': {'kernel_size': 2}, 'conv_params': {'kernel_size': 2, 'stride': 1, 'padding': 1}, 'kw': {}}
Experiment 1.4: Adding depth with Residual Networks¶
Now we'll test the effect of skip connections on the training and performance.
Configurations:
- `K=[32]` fixed, with `L=8,16,32` varying per run.
- `K=[64, 128, 256]` fixed, with `L=2,4,8` varying per run.
So 6 different runs in total.
Naming runs:
Each run should be named exp1_4_L{}_K{}-{}-{} where the braces are placeholders for the values.
TODO: Run the experiment on the above configuration with the ResNet model. Make sure the result file names are as expected. Use the following blocks to display the results.
# Define the lists for filters per layer (k) and layers per block (L)
k_list = [32]
L_list = [8,16,32]
seed = 42
# Run the experiments
run_experiments(k_list=k_list,
                L_list=L_list,
                seed=seed,
                experiment_name="exp1_4",
                bs_train=128,
                batches=250,
                epochs=10,
                early_stopping=5,
                pool_every=10,
                hidden_dims=[100],
                model_type='resnet')
# Second Experiment
k_list = [[64, 128, 256]]
L_list = [2,4,8]
seed = 42
# Run the experiments
run_experiments(k_list=k_list,
                L_list=L_list,
                seed=seed,
                experiment_name="exp1_4",
                bs_train=128,
                batches=250,
                epochs=10,
                early_stopping=5,
                pool_every=10,
                hidden_dims=[100],
                model_type='resnet')
plot_exp_results('exp1_4_L*_K32.json')
common config: {'run_name': 'exp1_4', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 250, 'epochs': 10, 'early_stopping': 5, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 10, 'hidden_dims': [100], 'model_type': 'resnet', 'activation': 'relu', 'activation_params': {}, 'batch_norm': False, 'dropout': 0.4, 'bottleneck': False, 'pooling_type': 'avg', 'pooling_params': {'kernel_size': 2}, 'conv_params': {'kernel_size': 2, 'stride': 1, 'padding': 1}, 'kw': {}}
plot_exp_results('exp1_4_L*_K64*.json')
common config: {'run_name': 'exp1_4', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 250, 'epochs': 10, 'early_stopping': 5, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 10, 'hidden_dims': [100], 'model_type': 'resnet', 'activation': 'relu', 'activation_params': {}, 'batch_norm': False, 'dropout': 0.4, 'bottleneck': False, 'pooling_type': 'avg', 'pooling_params': {'kernel_size': 2}, 'conv_params': {'kernel_size': 2, 'stride': 1, 'padding': 1}, 'kw': {}}
Questions¶
TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.
from cs236781.answers import display_answer
import hw2.answers
Question 1¶
Analyze your results from experiment 1.1. In particular,
- Explain the effect of depth on the accuracy. What depth produces the best results and why do you think that's the case?
- Were there values of `L` for which the network wasn't trainable? What causes this? Suggest two things which may be done to resolve it, at least partially.
display_answer(hw2.answers.part5_q1)
Your answer:
Part 1:
We seem to reach peak performance at L=2, and we think this is mainly due to two things. First, we reduced the dataset to around 128*250 = 32,000 training examples and about 5,400 test samples, so given that this experiment uses a basic CNN, four or more layers may simply be too many parameters to train well on this amount of data, as we saw in class. Moreover, when we first trained on the full 50k training samples (with the 10k test set), L=4 actually came out on top; we only reduced the dataset for runtime purposes. So we conclude that with enough samples, more layers should be better, given very large (or unbounded) compute power and time.
Part 2: We clearly see that for L=8,16 the network wasn't trainable. This may be due to vanishing gradients, and also to the fact that with this setup, the deeper the network, the more pooling is applied, so the "information loss" combined with vanishing gradients may be too much for the network to train. Two things that may at least partially resolve this are adding batch normalization after each convolution (keeping activation and gradient magnitudes stable) and adding skip connections as in ResNet (giving gradients a short path back to the early layers).
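As a hedged illustration of these two remedies (our own sketch, not the hw2 implementation), they amount to inserting nn.BatchNorm2d after each convolution and adding an identity shortcut around a pair of convolutions:
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Two 3x3 convs with batch norm and an identity shortcut (assumes
    in_channels == out_channels, so the skip needs no projection)."""
    def __init__(self, channels):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),  # remedy 1: normalize activations
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
    def forward(self, x):
        return torch.relu(self.main(x) + x)  # remedy 2: skip connection

x = torch.randn(4, 32, 16, 16)
print(SkipBlock(32)(x).shape)  # torch.Size([4, 32, 16, 16])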
Question 2¶
Analyze your results from experiment 1.2. In particular, compare to the results of experiment 1.1.
display_answer(hw2.answers.part5_q2)
Your answer: First of all, compared to experiment 1.1, in this experiment we get a significant bump in training accuracy and a mild bump in test accuracy across the board. Interestingly, we see here that more kernels are better given more layers, which makes sense: with few layers like 2 or 3 the extra kernels are perhaps not "utilized" enough, while with 4 they help much more. All networks here are trainable, since they are not that deep, and for the same reason the kernels themselves aren't affected by the problems stated before.
Question 3¶
Analyze your results from experiment 1.3.
display_answer(hw2.answers.part5_q3)
Your answer:
We can clearly see here that 2 layers perform much better than 3 or 4. This may be due to various factors; we suspect the first layers extract some useful "mapping" while the remaining layers just add noise and extra trainable parameters, whose training depends on the many kernels applied beforehand. We have this suspicion since for L=3,4 the train and test accuracy is about 10%, which is no better than randomly guessing one of the 10 classes.
Question 4¶
Analyze your results from experiment 1.4. Compare to experiment 1.1 and 1.3.
display_answer(hw2.answers.part5_q4)
Your answer:
This time we are using a ResNet architecture. As shown in the plots, in the first part (K=32) there is no vanishing-gradients problem: the more layers, the better. Notice that even though the test-set performance stayed roughly the same compared to 1.1 and 1.3, the training accuracy is much closer to the test accuracy, suggesting far less overfitting.
In the second part, with much larger K, performance peaked at L=4. Specifically, both train and test accuracy are considerably better than in 1.1 and 1.3. We conclude that there is a specific balance to be maintained between K and L, and it is not a simple linear relation.
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$
Part 6: YOLO - Object Detection¶
In this part we will use an object detection architecture called YOLO (You Only Look Once) to detect objects in images. We'll use pretrained model weights (YOLOv5) found here: https://github.com/ultralytics/yolov5
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Load the YOLO model
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
model.to(device)
# Images
img1 = 'imgs/DolphinsInTheSky.jpg'
img2 = 'imgs/cat-shiba-inu-2.jpg'
Using cache found in C:\Users\tomer/.cache\torch\hub\ultralytics_yolov5_master YOLOv5 2025-1-11 Python-3.8.12 torch-1.10.1 CUDA:0 (NVIDIA GeForce RTX 3080, 10240MiB) Fusing layers... YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs Adding AutoShape...
Inference with YOLO¶
You are provided with 2 images (img1 and img2). TODO:
- Detect objects using the YOLOv5 model for these 2 images.
- Print the inference output with bounding boxes.
- Look at the inference results and answer the question below.
#Insert the inference code here.
with torch.no_grad():
    results1 = model(img1)
    results2 = model(img2)
# Print results with bounding boxes for each image
print("Results for img1 (DolphinsInTheSky):")
results1.print() # Prints detected objects and their confidence scores
print("\nResults for img2 (cat-shiba-inu-2):")
results2.print()
# Show images with bounding boxes
results1.show()
results2.show()
image 1/1: 183x275 2 persons, 1 surfboard Speed: 11.0ms pre-process, 11.0ms inference, 207.3ms NMS per image at shape (1, 3, 448, 640) image 1/1: 750x750 2 cats, 1 dog Speed: 8.0ms pre-process, 10.0ms inference, 57.8ms NMS per image at shape (1, 3, 640, 640)
Results for img1 (DolphinsInTheSky): Results for img2 (cat-shiba-inu-2):
Question 1¶
Analyze the inference results of the 2 images.
- How well did the model detect the objects in the pictures? With what confidence?
- What can possibly be the reason for the model's failures? Suggest methods to resolve that issue.
- Recall that we learned how to fool a model by an adversarial attack (PGD); describe how you would attack an object detection model (such as YOLO).
from cs236781.answers import display_answer
import hw2.answers
display_answer(hw2.answers.part6_q1)
Your answer:
The model's performance was poor. It made misclassifications such as identifying dolphins as persons, and it failed to differentiate between dogs and cats. The confidence was also relatively low in most of the classifications.
Firstly, YOLOv5s is trained on the COCO dataset, which has a predefined set of classes. COCO does not include a class for dolphins, so the model matches dolphins to the closest available class, often "person", due to the similarity in shape. Also, YOLOv5s is a lightweight variant designed for speed, which sacrifices some accuracy. Differentiating between dogs and cats can be challenging for models trained on COCO, especially in unusual contexts, poses, or lighting.
If we are not constrained by hardware or inference time, we can use larger YOLOv5 models; these are more accurate but require more computational resources. We can also clone the model and fine-tune it ourselves on a specialized labeled dataset that includes classes like dolphins, dogs, and cats in various positions and lighting conditions.
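For example (a one-line sketch; 'yolov5m', 'yolov5l', and 'yolov5x' are the larger variants published on the same hub), swapping in a bigger model only changes the hub entry name:
model_large = torch.hub.load("ultralytics/yolov5", "yolov5x")  # largest, most accurate variant
model_large.to(device)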
To attack an object detection model like YOLO using PGD, the goal is to generate small perturbations of the input image that mislead the model's predictions. YOLO relies on both classification and bounding-box predictions. In PGD, the process starts by taking a copy of the original image and then iteratively adjusting it using the gradient of the loss function, so as to increase the model's error. The perturbation is constrained to avoid noticeable changes to the image while still causing incorrect predictions.
The attack targets both classification (misclassifying objects) and localization (distorting bounding boxes or missing objects entirely). After several iterations, the perturbation is refined to maximize the model's failure to detect or correctly classify objects. The attack's effectiveness is evaluated by checking whether the model misclassifies, fails to detect, or inaccurately locates objects in the image.
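A minimal sketch of such a PGD attack (our own illustration, not code we ran: detection_loss is a hypothetical stand-in for YOLO's combined objectness/class/box loss, and the raw model would have to be used here, since the AutoShape wrapper above does not propagate gradients):
import torch

def pgd_attack(model, image, detection_loss, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: iteratively ascend the detection loss, projecting the
    perturbation back into an eps-ball around the original image."""
    x_adv = image.clone().detach()
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = detection_loss(model(x_adv))      # error we want to maximize
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # gradient ascent step
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                         # keep pixel values valid
    return x_adv.detach()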
Creative Detection Failures¶
Object detection pitfalls include, for example: occlusion - when objects are partially occluded and thus missing important features; model bias - when a model learns some bias about an object, it may recognize it as something else in a different setup; and many others, such as deformation, illumination conditions, cluttered or textured backgrounds, and blurring due to moving objects.
TODO: Take pictures that demonstrate 3 of the above object detection pitfalls, run inference, and analyze the results.
#Insert the inference code here.
# Images
img1 = 'imgs/camu_cat.png'
img2 = 'imgs/cat_that_looks_like_dog.PNG'
img3 = 'imgs/blured_cat.PNG'
with torch.no_grad():
    results1 = model(img1)
    results2 = model(img2)
    results3 = model(img3)
# Show images with bounding boxes
results1.show()
results2.show()
results3.show()
Question 3¶
Analyze the results of the inference.
- How well did the model detect the objects in the pictures? Explain.
display_answer(hw2.answers.part6_q3)
Your answer:
The model did terribly; most humans can recognize that these pictures are of cats (maybe except in the third image). The model mistook the cat in the first picture for a sheep because of its fur and the white carpet. It misclassified the second cat as a dog, because it really does resemble a dog, although you can clearly see cat features. Somehow it misclassified the cat in the third picture as a toilet... probably because of the blur.
Bonus¶
Try improving the model performance over poorly recognized images by changing them. Describe the manipulations you did to the pictures.
img1 = 'imgs/camu_cat_2.png'
img2 = 'imgs/cat_that_looks_like_dog_2.PNG'
img3 = 'imgs/blured_cat_2.PNG'
with torch.no_grad():
    results1 = model(img1)
    results2 = model(img2)
    results3 = model(img3)
# Show images with bounding boxes
results1.show()
results2.show()
results3.show()
display_answer(hw2.answers.part6_bonus)
Your answer:
By removing the background and adding distinct features, like cat eyes, we effectively made the cat more recognizable. The absence of a distracting or camouflaging background allowed the model to focus solely on the key features of the cat, improving recognition accuracy.
We changed the cat's nose and ears to make it resemble a cat more closely.
We added a cat head and reduced the blur a little.
These changes made the model recognize the cat correctly.